Test Report: Docker_Linux_docker_arm64 19636

a6feba20ebb4dc887776b248ea5c810d31cc7846:2024-09-13:36198

Test fail (1/342)

Order | Failed test | Duration (s)
33    | TestAddons/parallel/Registry | 75.21
TestAddons/parallel/Registry (75.21s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 6.078127ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-btz86" [71148c5e-7525-45fb-8380-24b29240e9e4] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003864331s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ftdlk" [6e2bf204-eddc-452f-8693-4f930b88a93b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004319086s
addons_test.go:338: (dbg) Run:  kubectl --context addons-751971 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-751971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-751971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.127292515s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-751971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 ip
2024/09/13 18:34:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable registry --alsologtostderr -v=1
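The failing reachability check above can be retried by hand against a live cluster; a minimal sketch, assuming the same profile name `addons-751971`, the registry addon still enabled, and a `registry` Service in `kube-system` (the hostname the test resolves):

```shell
# Re-run the in-cluster check the test performs (same image and command
# as the captured log; adjust the --context if your profile differs).
kubectl --context addons-751971 run --rm registry-test \
  --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# If that times out, inspect the Service and its Endpoints directly to see
# whether the registry pod is actually backing the ClusterIP:
kubectl --context addons-751971 -n kube-system get svc registry
kubectl --context addons-751971 -n kube-system get endpoints registry

# The host-side path the test probes next (the log shows GET on port 5000):
curl -sI "http://$(out/minikube-linux-arm64 -p addons-751971 ip):5000"
```

These are diagnostic commands that require the running test cluster; the Service/Endpoints names are inferred from the DNS name in the log, not confirmed by it.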
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-751971
helpers_test.go:235: (dbg) docker inspect addons-751971:

-- stdout --
	[
	    {
	        "Id": "e9491289a299d1ec17346db74390971c6a194b46b144bb3d1bf54db26010c6b1",
	        "Created": "2024-09-13T18:21:24.501756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-13T18:21:24.67553381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7fd83909ee30d45ee853480d01e762968b1b9847bff4690fcb8ae034ea6e4a6b",
	        "ResolvConfPath": "/var/lib/docker/containers/e9491289a299d1ec17346db74390971c6a194b46b144bb3d1bf54db26010c6b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9491289a299d1ec17346db74390971c6a194b46b144bb3d1bf54db26010c6b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9491289a299d1ec17346db74390971c6a194b46b144bb3d1bf54db26010c6b1/hosts",
	        "LogPath": "/var/lib/docker/containers/e9491289a299d1ec17346db74390971c6a194b46b144bb3d1bf54db26010c6b1/e9491289a299d1ec17346db74390971c6a194b46b144bb3d1bf54db26010c6b1-json.log",
	        "Name": "/addons-751971",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-751971:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-751971",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8935cb269dbb1974b059f9edbb106d79c6a37a9045670ad971a8bb9504e59190-init/diff:/var/lib/docker/overlay2/5031f18bf9c6ac943e852815eecef7e600d2b873b27e1736f52a418bd36e5c66/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8935cb269dbb1974b059f9edbb106d79c6a37a9045670ad971a8bb9504e59190/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8935cb269dbb1974b059f9edbb106d79c6a37a9045670ad971a8bb9504e59190/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8935cb269dbb1974b059f9edbb106d79c6a37a9045670ad971a8bb9504e59190/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-751971",
	                "Source": "/var/lib/docker/volumes/addons-751971/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-751971",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-751971",
	                "name.minikube.sigs.k8s.io": "addons-751971",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd9977e0d410afd6483a3f5e285d9945089a1fece0c75d2e00cbb68e1357e6e0",
	            "SandboxKey": "/var/run/docker/netns/cd9977e0d410",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-751971": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "130c5420be8c94d6b7270c0e5123fa1bdc4145984fbf86960ff35c62475e678e",
	                    "EndpointID": "83c1ffb1acb246bff6db29637779043e0a9ca71025ba50fa9b95984969d2b00c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-751971",
	                        "e9491289a299"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-751971 -n addons-751971
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 logs -n 25: (1.385362404s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-518803   | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-518803              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p download-only-518803              | download-only-518803   | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | -o=json --download-only              | download-only-650419   | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-650419              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p download-only-650419              | download-only-650419   | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p download-only-518803              | download-only-518803   | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p download-only-650419              | download-only-650419   | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | --download-only -p                   | download-docker-557017 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | download-docker-557017               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-557017            | download-docker-557017 | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| start   | --download-only -p                   | binary-mirror-528081   | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | binary-mirror-528081                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46241               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-528081              | binary-mirror-528081   | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:21 UTC |
	| addons  | enable dashboard -p                  | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-751971                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC |                     |
	|         | addons-751971                        |                        |         |         |                     |                     |
	| start   | -p addons-751971 --wait=true         | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:21 UTC | 13 Sep 24 18:24 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-751971 addons disable         | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:25 UTC | 13 Sep 24 18:25 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-751971 addons disable         | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-751971 addons                 | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-751971 addons                 | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | -p addons-751971                     |                        |         |         |                     |                     |
	| ip      | addons-751971 ip                     | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	| addons  | addons-751971 addons disable         | addons-751971          | jenkins | v1.34.0 | 13 Sep 24 18:34 UTC | 13 Sep 24 18:34 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:21:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:21:01.580865    8332 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:21:01.580995    8332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:01.581007    8332 out.go:358] Setting ErrFile to fd 2...
	I0913 18:21:01.581014    8332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:21:01.581280    8332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 18:21:01.581783    8332 out.go:352] Setting JSON to false
	I0913 18:21:01.582569    8332 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":209,"bootTime":1726251453,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 18:21:01.582652    8332 start.go:139] virtualization:  
	I0913 18:21:01.587681    8332 out.go:177] * [addons-751971] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 18:21:01.591054    8332 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:21:01.591111    8332 notify.go:220] Checking for updates...
	I0913 18:21:01.597115    8332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:21:01.600118    8332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	I0913 18:21:01.602593    8332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	I0913 18:21:01.605631    8332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 18:21:01.608674    8332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:21:01.611648    8332 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:21:01.639604    8332 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:21:01.639736    8332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:21:01.696487    8332 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 18:21:01.687180979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:21:01.696597    8332 docker.go:318] overlay module found
	I0913 18:21:01.701002    8332 out.go:177] * Using the docker driver based on user configuration
	I0913 18:21:01.703430    8332 start.go:297] selected driver: docker
	I0913 18:21:01.703458    8332 start.go:901] validating driver "docker" against <nil>
	I0913 18:21:01.703471    8332 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:21:01.704207    8332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:21:01.757834    8332 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 18:21:01.748480835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:21:01.758103    8332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:21:01.758330    8332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:21:01.761040    8332 out.go:177] * Using Docker driver with root privileges
	I0913 18:21:01.763339    8332 cni.go:84] Creating CNI manager for ""
	I0913 18:21:01.763415    8332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:21:01.763426    8332 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:21:01.763526    8332 start.go:340] cluster config:
	{Name:addons-751971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-751971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:21:01.766451    8332 out.go:177] * Starting "addons-751971" primary control-plane node in "addons-751971" cluster
	I0913 18:21:01.769070    8332 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 18:21:01.771351    8332 out.go:177] * Pulling base image v0.0.45-1726193793-19634 ...
	I0913 18:21:01.773942    8332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:21:01.774000    8332 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19636-2205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 18:21:01.774025    8332 cache.go:56] Caching tarball of preloaded images
	I0913 18:21:01.774032    8332 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local docker daemon
	I0913 18:21:01.774128    8332 preload.go:172] Found /home/jenkins/minikube-integration/19636-2205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 18:21:01.774146    8332 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 18:21:01.774487    8332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/config.json ...
	I0913 18:21:01.774515    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/config.json: {Name:mk99899eb838a8d78117dbef052feddfa62ba877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:01.789909    8332 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:21:01.790035    8332 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory
	I0913 18:21:01.790100    8332 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory, skipping pull
	I0913 18:21:01.790105    8332 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e exists in cache, skipping pull
	I0913 18:21:01.790112    8332 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e as a tarball
	I0913 18:21:01.790118    8332 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e from local cache
	I0913 18:21:19.113943    8332 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e from cached tarball
	I0913 18:21:19.113982    8332 cache.go:194] Successfully downloaded all kic artifacts
	I0913 18:21:19.114025    8332 start.go:360] acquireMachinesLock for addons-751971: {Name:mk0f74b289b230071efc6a5f366851cbe08c007e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 18:21:19.114150    8332 start.go:364] duration metric: took 93.038µs to acquireMachinesLock for "addons-751971"
	I0913 18:21:19.114180    8332 start.go:93] Provisioning new machine with config: &{Name:addons-751971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-751971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 18:21:19.114260    8332 start.go:125] createHost starting for "" (driver="docker")
	I0913 18:21:19.116738    8332 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0913 18:21:19.116988    8332 start.go:159] libmachine.API.Create for "addons-751971" (driver="docker")
	I0913 18:21:19.117038    8332 client.go:168] LocalClient.Create starting
	I0913 18:21:19.117154    8332 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca.pem
	I0913 18:21:19.253176    8332 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/cert.pem
	I0913 18:21:19.446548    8332 cli_runner.go:164] Run: docker network inspect addons-751971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0913 18:21:19.461920    8332 cli_runner.go:211] docker network inspect addons-751971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0913 18:21:19.461999    8332 network_create.go:284] running [docker network inspect addons-751971] to gather additional debugging logs...
	I0913 18:21:19.462019    8332 cli_runner.go:164] Run: docker network inspect addons-751971
	W0913 18:21:19.477261    8332 cli_runner.go:211] docker network inspect addons-751971 returned with exit code 1
	I0913 18:21:19.477293    8332 network_create.go:287] error running [docker network inspect addons-751971]: docker network inspect addons-751971: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-751971 not found
	I0913 18:21:19.477305    8332 network_create.go:289] output of [docker network inspect addons-751971]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-751971 not found
	
	** /stderr **
	I0913 18:21:19.477400    8332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 18:21:19.496739    8332 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b0a90}
	I0913 18:21:19.496783    8332 network_create.go:124] attempt to create docker network addons-751971 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0913 18:21:19.496837    8332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-751971 addons-751971
	I0913 18:21:19.567843    8332 network_create.go:108] docker network addons-751971 192.168.49.0/24 created
	I0913 18:21:19.567876    8332 kic.go:121] calculated static IP "192.168.49.2" for the "addons-751971" container
	I0913 18:21:19.567958    8332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0913 18:21:19.584369    8332 cli_runner.go:164] Run: docker volume create addons-751971 --label name.minikube.sigs.k8s.io=addons-751971 --label created_by.minikube.sigs.k8s.io=true
	I0913 18:21:19.602364    8332 oci.go:103] Successfully created a docker volume addons-751971
	I0913 18:21:19.602474    8332 cli_runner.go:164] Run: docker run --rm --name addons-751971-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-751971 --entrypoint /usr/bin/test -v addons-751971:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -d /var/lib
	I0913 18:21:20.667141    8332 cli_runner.go:217] Completed: docker run --rm --name addons-751971-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-751971 --entrypoint /usr/bin/test -v addons-751971:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -d /var/lib: (1.064626177s)
	I0913 18:21:20.667168    8332 oci.go:107] Successfully prepared a docker volume addons-751971
	I0913 18:21:20.667203    8332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:21:20.667221    8332 kic.go:194] Starting extracting preloaded images to volume ...
	I0913 18:21:20.667305    8332 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19636-2205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-751971:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -I lz4 -xf /preloaded.tar -C /extractDir
	I0913 18:21:24.431571    8332 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19636-2205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-751971:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e -I lz4 -xf /preloaded.tar -C /extractDir: (3.76422288s)
	I0913 18:21:24.431606    8332 kic.go:203] duration metric: took 3.764380936s to extract preloaded images to volume ...
	W0913 18:21:24.431859    8332 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0913 18:21:24.432004    8332 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0913 18:21:24.486704    8332 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-751971 --name addons-751971 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-751971 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-751971 --network addons-751971 --ip 192.168.49.2 --volume addons-751971:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e
	I0913 18:21:24.852734    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Running}}
	I0913 18:21:24.875861    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:24.897474    8332 cli_runner.go:164] Run: docker exec addons-751971 stat /var/lib/dpkg/alternatives/iptables
	I0913 18:21:24.969901    8332 oci.go:144] the created container "addons-751971" has a running status.
	I0913 18:21:24.969945    8332 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa...
	I0913 18:21:25.196423    8332 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0913 18:21:25.217259    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:25.240063    8332 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0913 18:21:25.240088    8332 kic_runner.go:114] Args: [docker exec --privileged addons-751971 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0913 18:21:25.330789    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:25.357148    8332 machine.go:93] provisionDockerMachine start ...
	I0913 18:21:25.357254    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:25.376044    8332 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:25.376300    8332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 18:21:25.376316    8332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 18:21:25.378285    8332 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0913 18:21:28.525557    8332 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-751971
	
	I0913 18:21:28.525583    8332 ubuntu.go:169] provisioning hostname "addons-751971"
	I0913 18:21:28.525657    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:28.542365    8332 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:28.542615    8332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 18:21:28.542633    8332 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-751971 && echo "addons-751971" | sudo tee /etc/hostname
	I0913 18:21:28.698438    8332 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-751971
	
	I0913 18:21:28.698512    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:28.715100    8332 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:28.715348    8332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 18:21:28.715371    8332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-751971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-751971/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-751971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 18:21:28.862234    8332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 18:21:28.862291    8332 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19636-2205/.minikube CaCertPath:/home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19636-2205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19636-2205/.minikube}
	I0913 18:21:28.862334    8332 ubuntu.go:177] setting up certificates
	I0913 18:21:28.862358    8332 provision.go:84] configureAuth start
	I0913 18:21:28.862426    8332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-751971
	I0913 18:21:28.879718    8332 provision.go:143] copyHostCerts
	I0913 18:21:28.879799    8332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19636-2205/.minikube/ca.pem (1078 bytes)
	I0913 18:21:28.879934    8332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19636-2205/.minikube/cert.pem (1123 bytes)
	I0913 18:21:28.880016    8332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19636-2205/.minikube/key.pem (1679 bytes)
	I0913 18:21:28.880087    8332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19636-2205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca-key.pem org=jenkins.addons-751971 san=[127.0.0.1 192.168.49.2 addons-751971 localhost minikube]
	I0913 18:21:29.201051    8332 provision.go:177] copyRemoteCerts
	I0913 18:21:29.201124    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 18:21:29.201166    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:29.218144    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:29.318763    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 18:21:29.343564    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 18:21:29.368552    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 18:21:29.393197    8332 provision.go:87] duration metric: took 530.810739ms to configureAuth
	I0913 18:21:29.393222    8332 ubuntu.go:193] setting minikube options for container-runtime
	I0913 18:21:29.393400    8332 config.go:182] Loaded profile config "addons-751971": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:21:29.393461    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:29.410677    8332 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:29.410973    8332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 18:21:29.410991    8332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 18:21:29.554538    8332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0913 18:21:29.554562    8332 ubuntu.go:71] root file system type: overlay
	I0913 18:21:29.554677    8332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 18:21:29.554747    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:29.572272    8332 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:29.572520    8332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 18:21:29.572599    8332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 18:21:29.734583    8332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 18:21:29.734714    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:29.753218    8332 main.go:141] libmachine: Using SSH client type: native
	I0913 18:21:29.753458    8332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 18:21:29.753481    8332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 18:21:30.631704    8332 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-13 18:21:29.728748406 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
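The provisioning step that produced the diff above uses a "render to `.new`, replace only if changed" idiom (`diff -u old new || { mv new old; restart; }`): `diff` exits 0 when the files are identical, so the replace-and-restart branch runs only when the rendered unit actually differs. A minimal sketch of the same pattern against throwaway files (paths and contents are illustrative, not the real unit):

```shell
# Sketch of minikube's "install only if changed" idiom:
#   diff -u current new || { mv new current; <daemon-reload + restart>; }
tmpdir=$(mktemp -d)
printf 'Restart=always\n'     > "$tmpdir/docker.service"
printf 'Restart=on-failure\n' > "$tmpdir/docker.service.new"

if diff -u "$tmpdir/docker.service" "$tmpdir/docker.service.new" > /dev/null; then
  action="skip"       # identical: leave the running service alone
else
  mv "$tmpdir/docker.service.new" "$tmpdir/docker.service"
  action="restart"    # changed: this is where the log runs daemon-reload + restart docker
fi
echo "$action"
```

Because the comparison happens before any `systemctl` call, an unchanged unit never triggers a needless Docker restart.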
	I0913 18:21:30.631739    8332 machine.go:96] duration metric: took 5.274562211s to provisionDockerMachine
	I0913 18:21:30.631750    8332 client.go:171] duration metric: took 11.514700316s to LocalClient.Create
	I0913 18:21:30.631761    8332 start.go:167] duration metric: took 11.514774875s to libmachine.API.Create "addons-751971"
	I0913 18:21:30.631768    8332 start.go:293] postStartSetup for "addons-751971" (driver="docker")
	I0913 18:21:30.631779    8332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:21:30.631852    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:21:30.631898    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:30.651222    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:30.755859    8332 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 18:21:30.759613    8332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0913 18:21:30.759652    8332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0913 18:21:30.759664    8332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0913 18:21:30.759671    8332 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0913 18:21:30.759708    8332 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-2205/.minikube/addons for local assets ...
	I0913 18:21:30.759801    8332 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-2205/.minikube/files for local assets ...
	I0913 18:21:30.759829    8332 start.go:296] duration metric: took 128.05474ms for postStartSetup
	I0913 18:21:30.760142    8332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-751971
	I0913 18:21:30.779573    8332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/config.json ...
	I0913 18:21:30.779866    8332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:21:30.779923    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:30.799313    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:30.895071    8332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0913 18:21:30.899948    8332 start.go:128] duration metric: took 11.78567391s to createHost
	I0913 18:21:30.899975    8332 start.go:83] releasing machines lock for "addons-751971", held for 11.785811149s
	I0913 18:21:30.900040    8332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-751971
	I0913 18:21:30.917431    8332 ssh_runner.go:195] Run: cat /version.json
	I0913 18:21:30.917483    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:30.917492    8332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 18:21:30.917564    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:30.934401    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:30.942277    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:31.029791    8332 ssh_runner.go:195] Run: systemctl --version
	I0913 18:21:31.168142    8332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 18:21:31.173304    8332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0913 18:21:31.200607    8332 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0913 18:21:31.200686    8332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:21:31.230848    8332 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
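The loopback patch a few lines up inserts a `"name"` field (only if one is missing) and pins `cniVersion` to `1.0.0`. The same two sed edits, run against a scratch copy of a typical loopback conf instead of `/etc/cni/net.d` (assumes GNU sed for `-i` and the one-line `i` form, as on the Ubuntu host in this log):

```shell
# Reproduce the loopback CNI patch on a throwaway file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF

# Add "name" before the "type" line unless it is already present;
# the backslash-escaped spaces preserve the 4-space indentation.
grep -q '"name"' "$conf" || \
  sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$conf"
# Pin the CNI version, whatever it was before.
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$conf"
cat "$conf"
```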
	I0913 18:21:31.230876    8332 start.go:495] detecting cgroup driver to use...
	I0913 18:21:31.230912    8332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 18:21:31.231040    8332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:21:31.248236    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 18:21:31.259033    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 18:21:31.269299    8332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 18:21:31.269415    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 18:21:31.279549    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:21:31.290093    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 18:21:31.300354    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:21:31.310794    8332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:21:31.320587    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 18:21:31.331344    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 18:21:31.342352    8332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 18:21:31.353569    8332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:21:31.363098    8332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:21:31.373072    8332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:31.463621    8332 ssh_runner.go:195] Run: sudo systemctl restart containerd
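The run of sed commands above rewrites `/etc/containerd/config.toml` in place to match the detected "cgroupfs" driver. The key edit is the `SystemdCgroup` one; here it is against a throwaway config instead of the real file (GNU sed assumed):

```shell
# Force SystemdCgroup = false while keeping the original indentation (\1),
# exactly as the log's sed does for the cgroupfs driver.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

After every config edit, the log reloads systemd and restarts containerd so the new cgroup driver takes effect.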
	I0913 18:21:31.565981    8332 start.go:495] detecting cgroup driver to use...
	I0913 18:21:31.566034    8332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 18:21:31.566098    8332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 18:21:31.581352    8332 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0913 18:21:31.581426    8332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 18:21:31.596989    8332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:21:31.616602    8332 ssh_runner.go:195] Run: which cri-dockerd
	I0913 18:21:31.621381    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 18:21:31.633665    8332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 18:21:31.659629    8332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 18:21:31.757943    8332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 18:21:31.864213    8332 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 18:21:31.864382    8332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 18:21:31.886140    8332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:31.989888    8332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 18:21:32.277020    8332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 18:21:32.290159    8332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 18:21:32.303023    8332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 18:21:32.403384    8332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 18:21:32.504989    8332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:32.597045    8332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 18:21:32.611282    8332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 18:21:32.622616    8332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:32.708362    8332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 18:21:32.775441    8332 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 18:21:32.775625    8332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 18:21:32.780177    8332 start.go:563] Will wait 60s for crictl version
	I0913 18:21:32.780290    8332 ssh_runner.go:195] Run: which crictl
	I0913 18:21:32.783913    8332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 18:21:32.820166    8332 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0913 18:21:32.820287    8332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 18:21:32.843878    8332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 18:21:32.869289    8332 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 18:21:32.869446    8332 cli_runner.go:164] Run: docker network inspect addons-751971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 18:21:32.885213    8332 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0913 18:21:32.889221    8332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
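The `/etc/hosts` command above is an idempotent upsert: strip any existing `host.minikube.internal` entry with `grep -v`, append the current one, then copy the result back over the original. The same idiom against a scratch hosts file (a literal tab is built with `printf` so the sketch stays POSIX-sh safe, unlike the log's bash-only `$'\t'`):

```shell
# Idempotently (re)write the host.minikube.internal entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

tab=$(printf '\t')
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Running it any number of times leaves exactly one entry, which is why minikube can apply it unconditionally on every start.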
	I0913 18:21:32.900644    8332 kubeadm.go:883] updating cluster {Name:addons-751971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-751971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:21:32.900767    8332 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:21:32.900832    8332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 18:21:32.918441    8332 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 18:21:32.918500    8332 docker.go:615] Images already preloaded, skipping extraction
	I0913 18:21:32.918573    8332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 18:21:32.937505    8332 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 18:21:32.937526    8332 cache_images.go:84] Images are preloaded, skipping loading
	I0913 18:21:32.937536    8332 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0913 18:21:32.937631    8332 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-751971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-751971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 18:21:32.937697    8332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 18:21:32.985126    8332 cni.go:84] Creating CNI manager for ""
	I0913 18:21:32.985156    8332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:21:32.985168    8332 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:21:32.985188    8332 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-751971 NodeName:addons-751971 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:21:32.985329    8332 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-751971"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 18:21:32.985402    8332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:21:32.994806    8332 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 18:21:32.994878    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 18:21:33.009178    8332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 18:21:33.030203    8332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:21:33.050278    8332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0913 18:21:33.070372    8332 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0913 18:21:33.074011    8332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 18:21:33.085647    8332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:33.168959    8332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:21:33.185756    8332 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971 for IP: 192.168.49.2
	I0913 18:21:33.185857    8332 certs.go:194] generating shared ca certs ...
	I0913 18:21:33.185940    8332 certs.go:226] acquiring lock for ca certs: {Name:mk77def875863f589d66bb860688a5e3d64e4959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:33.186222    8332 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-2205/.minikube/ca.key
	I0913 18:21:33.885488    8332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-2205/.minikube/ca.crt ...
	I0913 18:21:33.885537    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/ca.crt: {Name:mk029b03441c9b6c4e97baf8f1c8539f97372cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:33.885767    8332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-2205/.minikube/ca.key ...
	I0913 18:21:33.885781    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/ca.key: {Name:mkdd57a6cb13155ad70421f03e26d05a16534cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:33.885874    8332 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-2205/.minikube/proxy-client-ca.key
	I0913 18:21:34.351072    8332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-2205/.minikube/proxy-client-ca.crt ...
	I0913 18:21:34.351107    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/proxy-client-ca.crt: {Name:mkfde417193ed5bd5b3eda90a4a9de3ae3fa626f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:34.351285    8332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-2205/.minikube/proxy-client-ca.key ...
	I0913 18:21:34.351300    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/proxy-client-ca.key: {Name:mk721a19f6be02978cf591fb1981dcb2b88dedea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:34.351381    8332 certs.go:256] generating profile certs ...
	I0913 18:21:34.351451    8332 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.key
	I0913 18:21:34.351475    8332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt with IP's: []
	I0913 18:21:34.468234    8332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt ...
	I0913 18:21:34.468262    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: {Name:mk3c986db85b0d88ff3d4ca17d73bb5ea02405bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:34.468428    8332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.key ...
	I0913 18:21:34.468440    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.key: {Name:mkc10a596b7de9fefbe9accf3df387c24e1fa1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:34.468513    8332 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.key.bd329a76
	I0913 18:21:34.468534    8332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.crt.bd329a76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0913 18:21:34.734106    8332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.crt.bd329a76 ...
	I0913 18:21:34.734137    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.crt.bd329a76: {Name:mk77f50523cf9e5cb99c9c2825cae904daef1220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:34.734310    8332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.key.bd329a76 ...
	I0913 18:21:34.734325    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.key.bd329a76: {Name:mk54d81da8b51edccb7283afd6a056890ee0006a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:34.734435    8332 certs.go:381] copying /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.crt.bd329a76 -> /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.crt
	I0913 18:21:34.734523    8332 certs.go:385] copying /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.key.bd329a76 -> /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.key
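The "minikube" profile cert above is signed for the service VIP, loopback, and node IPs (`10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2`). A self-signed stand-in with the same SAN list can be produced with plain openssl (`-addext` and `x509 -ext` require OpenSSL 1.1.1 or newer; minikube itself generates these certs in Go, so this is only an illustration of the SAN contents):

```shell
# Self-signed stand-in for the apiserver cert, carrying the same IP SANs.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2" \
  -keyout "$dir/apiserver.key" -out "$dir/apiserver.crt" 2>/dev/null
openssl x509 -noout -ext subjectAltName -in "$dir/apiserver.crt"
```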
	I0913 18:21:34.734586    8332 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.key
	I0913 18:21:34.734605    8332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.crt with IP's: []
	I0913 18:21:35.180957    8332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.crt ...
	I0913 18:21:35.180992    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.crt: {Name:mk42731e2a5ccf40007b149201b6a72c2f861478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:35.181185    8332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.key ...
	I0913 18:21:35.181199    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.key: {Name:mkd4cd0439c6bb3605701ebf84882a3975f3143e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:35.181391    8332 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 18:21:35.181435    8332 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/ca.pem (1078 bytes)
	I0913 18:21:35.181466    8332 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:21:35.181495    8332 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-2205/.minikube/certs/key.pem (1679 bytes)
	I0913 18:21:35.182124    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:21:35.208263    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 18:21:35.232956    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:21:35.257312    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 18:21:35.280926    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 18:21:35.305473    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 18:21:35.330314    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:21:35.354774    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 18:21:35.379946    8332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19636-2205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:21:35.404140    8332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:21:35.421920    8332 ssh_runner.go:195] Run: openssl version
	I0913 18:21:35.427407    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:21:35.437222    8332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:21:35.440751    8332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:21:35.440861    8332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:21:35.447743    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 18:21:35.456896    8332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:21:35.460137    8332 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:21:35.460192    8332 kubeadm.go:392] StartCluster: {Name:addons-751971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-751971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:21:35.460323    8332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 18:21:35.475820    8332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:21:35.484516    8332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:21:35.493057    8332 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0913 18:21:35.493151    8332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:21:35.501822    8332 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:21:35.501850    8332 kubeadm.go:157] found existing configuration files:
	
	I0913 18:21:35.501901    8332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:21:35.511084    8332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:21:35.511185    8332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:21:35.519735    8332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:21:35.528363    8332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:21:35.528472    8332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:21:35.536770    8332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:21:35.545608    8332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:21:35.545700    8332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:21:35.554474    8332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:21:35.563483    8332 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:21:35.563548    8332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:21:35.573013    8332 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0913 18:21:35.628795    8332 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:21:35.628992    8332 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:21:35.655592    8332 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0913 18:21:35.655802    8332 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0913 18:21:35.655870    8332 kubeadm.go:310] OS: Linux
	I0913 18:21:35.655953    8332 kubeadm.go:310] CGROUPS_CPU: enabled
	I0913 18:21:35.656027    8332 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0913 18:21:35.656106    8332 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0913 18:21:35.656179    8332 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0913 18:21:35.656260    8332 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0913 18:21:35.656336    8332 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0913 18:21:35.656411    8332 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0913 18:21:35.656485    8332 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0913 18:21:35.656564    8332 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0913 18:21:35.723421    8332 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:21:35.723570    8332 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:21:35.723693    8332 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:21:35.742535    8332 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:21:35.746809    8332 out.go:235]   - Generating certificates and keys ...
	I0913 18:21:35.746999    8332 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:21:35.747117    8332 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:21:36.565517    8332 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:21:36.826815    8332 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:21:37.886645    8332 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:21:38.187874    8332 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:21:38.572380    8332 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:21:38.572705    8332 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-751971 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 18:21:39.453155    8332 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:21:39.453320    8332 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-751971 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 18:21:39.851005    8332 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:21:41.178476    8332 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:21:41.738757    8332 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:21:41.739052    8332 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:21:41.895294    8332 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:21:42.367083    8332 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:21:42.705803    8332 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:21:42.931519    8332 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:21:43.253349    8332 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:21:43.254221    8332 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:21:43.257651    8332 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:21:43.259967    8332 out.go:235]   - Booting up control plane ...
	I0913 18:21:43.260070    8332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:21:43.260153    8332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:21:43.261685    8332 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:21:43.274578    8332 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:21:43.281072    8332 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:21:43.281133    8332 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:21:43.386520    8332 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:21:43.386646    8332 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:21:45.385754    8332 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.00174172s
	I0913 18:21:45.385858    8332 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:21:51.388029    8332 kubeadm.go:310] [api-check] The API server is healthy after 6.002196589s
	I0913 18:21:51.420676    8332 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:21:51.460499    8332 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:21:51.494154    8332 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:21:51.494349    8332 kubeadm.go:310] [mark-control-plane] Marking the node addons-751971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:21:51.505589    8332 kubeadm.go:310] [bootstrap-token] Using token: hiiv0e.1yh59j434ok3btot
	I0913 18:21:51.507399    8332 out.go:235]   - Configuring RBAC rules ...
	I0913 18:21:51.507525    8332 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:21:51.514680    8332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:21:51.529060    8332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:21:51.535132    8332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:21:51.539999    8332 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:21:51.546242    8332 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:21:51.795648    8332 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:21:52.221077    8332 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:21:52.795452    8332 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:21:52.796798    8332 kubeadm.go:310] 
	I0913 18:21:52.796874    8332 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:21:52.796881    8332 kubeadm.go:310] 
	I0913 18:21:52.796966    8332 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:21:52.796974    8332 kubeadm.go:310] 
	I0913 18:21:52.796999    8332 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:21:52.797064    8332 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:21:52.797121    8332 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:21:52.797126    8332 kubeadm.go:310] 
	I0913 18:21:52.797183    8332 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:21:52.797192    8332 kubeadm.go:310] 
	I0913 18:21:52.797239    8332 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:21:52.797247    8332 kubeadm.go:310] 
	I0913 18:21:52.797299    8332 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:21:52.797375    8332 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:21:52.797446    8332 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:21:52.797455    8332 kubeadm.go:310] 
	I0913 18:21:52.797751    8332 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:21:52.797836    8332 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:21:52.797842    8332 kubeadm.go:310] 
	I0913 18:21:52.797928    8332 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hiiv0e.1yh59j434ok3btot \
	I0913 18:21:52.798029    8332 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:32b09e3b948a2863c298f6c31d5b37453d072760f0797bfd4972c099fc1841d6 \
	I0913 18:21:52.798091    8332 kubeadm.go:310] 	--control-plane 
	I0913 18:21:52.798098    8332 kubeadm.go:310] 
	I0913 18:21:52.798182    8332 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:21:52.798187    8332 kubeadm.go:310] 
	I0913 18:21:52.798268    8332 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hiiv0e.1yh59j434ok3btot \
	I0913 18:21:52.798370    8332 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:32b09e3b948a2863c298f6c31d5b37453d072760f0797bfd4972c099fc1841d6 
	I0913 18:21:52.801481    8332 kubeadm.go:310] W0913 18:21:35.625308    1802 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:21:52.801780    8332 kubeadm.go:310] W0913 18:21:35.626316    1802 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:21:52.801999    8332 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0913 18:21:52.802130    8332 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:21:52.802156    8332 cni.go:84] Creating CNI manager for ""
	I0913 18:21:52.802176    8332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:21:52.804452    8332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 18:21:52.806435    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 18:21:52.815802    8332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 18:21:52.837210    8332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:21:52.837304    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:52.837345    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-751971 minikube.k8s.io/updated_at=2024_09_13T18_21_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=addons-751971 minikube.k8s.io/primary=true
	I0913 18:21:52.854010    8332 ops.go:34] apiserver oom_adj: -16
	I0913 18:21:52.949115    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:53.449187    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:53.950025    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:54.449206    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:54.949206    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:55.449470    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:55.949657    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:56.449787    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:56.949911    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:21:57.068219    8332 kubeadm.go:1113] duration metric: took 4.230984287s to wait for elevateKubeSystemPrivileges
	I0913 18:21:57.068253    8332 kubeadm.go:394] duration metric: took 21.608065485s to StartCluster
	I0913 18:21:57.068270    8332 settings.go:142] acquiring lock: {Name:mk58c00f8d999d86ff8ffd061d98c3c193cf57c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:57.068385    8332 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-2205/kubeconfig
	I0913 18:21:57.068785    8332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-2205/kubeconfig: {Name:mk13f5bdf0c8a77b0a1a38db142977dae63f6d41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:21:57.068980    8332 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 18:21:57.069094    8332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:21:57.069346    8332 config.go:182] Loaded profile config "addons-751971": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:21:57.069386    8332 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 18:21:57.069465    8332 addons.go:69] Setting yakd=true in profile "addons-751971"
	I0913 18:21:57.069484    8332 addons.go:234] Setting addon yakd=true in "addons-751971"
	I0913 18:21:57.069509    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.070017    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.070528    8332 addons.go:69] Setting inspektor-gadget=true in profile "addons-751971"
	I0913 18:21:57.070552    8332 addons.go:234] Setting addon inspektor-gadget=true in "addons-751971"
	I0913 18:21:57.070578    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.070757    8332 addons.go:69] Setting metrics-server=true in profile "addons-751971"
	I0913 18:21:57.070773    8332 addons.go:234] Setting addon metrics-server=true in "addons-751971"
	I0913 18:21:57.070794    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.071048    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.071189    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.073636    8332 addons.go:69] Setting cloud-spanner=true in profile "addons-751971"
	I0913 18:21:57.073670    8332 addons.go:234] Setting addon cloud-spanner=true in "addons-751971"
	I0913 18:21:57.073701    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.074236    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.074821    8332 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-751971"
	I0913 18:21:57.074846    8332 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-751971"
	I0913 18:21:57.075052    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.076180    8332 addons.go:69] Setting registry=true in profile "addons-751971"
	I0913 18:21:57.076200    8332 addons.go:234] Setting addon registry=true in "addons-751971"
	I0913 18:21:57.076224    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.076636    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.081193    8332 addons.go:69] Setting storage-provisioner=true in profile "addons-751971"
	I0913 18:21:57.081222    8332 addons.go:234] Setting addon storage-provisioner=true in "addons-751971"
	I0913 18:21:57.081286    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.082003    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.083464    8332 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-751971"
	I0913 18:21:57.083547    8332 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-751971"
	I0913 18:21:57.083606    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.084344    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.093722    8332 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-751971"
	I0913 18:21:57.093769    8332 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-751971"
	I0913 18:21:57.094155    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.097488    8332 addons.go:69] Setting default-storageclass=true in profile "addons-751971"
	I0913 18:21:57.097526    8332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-751971"
	I0913 18:21:57.097848    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.110157    8332 addons.go:69] Setting gcp-auth=true in profile "addons-751971"
	I0913 18:21:57.110208    8332 mustload.go:65] Loading cluster: addons-751971
	I0913 18:21:57.110813    8332 config.go:182] Loaded profile config "addons-751971": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:21:57.111180    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.112195    8332 addons.go:69] Setting volcano=true in profile "addons-751971"
	I0913 18:21:57.112225    8332 addons.go:234] Setting addon volcano=true in "addons-751971"
	I0913 18:21:57.112262    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.112717    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.129536    8332 addons.go:69] Setting ingress=true in profile "addons-751971"
	I0913 18:21:57.129676    8332 addons.go:234] Setting addon ingress=true in "addons-751971"
	I0913 18:21:57.129792    8332 out.go:177] * Verifying Kubernetes components...
	I0913 18:21:57.216020    8332 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 18:21:57.216288    8332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 18:21:57.129822    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.216994    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.129586    8332 addons.go:69] Setting volumesnapshots=true in profile "addons-751971"
	I0913 18:21:57.235999    8332 addons.go:234] Setting addon volumesnapshots=true in "addons-751971"
	I0913 18:21:57.244572    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.245300    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.241973    8332 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-751971"
	I0913 18:21:57.200342    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.231166    8332 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 18:21:57.129829    8332 addons.go:69] Setting ingress-dns=true in profile "addons-751971"
	I0913 18:21:57.247203    8332 addons.go:234] Setting addon ingress-dns=true in "addons-751971"
	I0913 18:21:57.247307    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.247905    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.266181    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.266709    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.269090    8332 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 18:21:57.270840    8332 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 18:21:57.272754    8332 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:21:57.272772    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 18:21:57.272831    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.288722    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 18:21:57.290626    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 18:21:57.292577    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 18:21:57.298074    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 18:21:57.318332    8332 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 18:21:57.321799    8332 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:21:57.321857    8332 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 18:21:57.321950    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.330322    8332 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 18:21:57.332082    8332 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 18:21:57.333919    8332 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 18:21:57.337967    8332 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:21:57.337993    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 18:21:57.342193    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.349039    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 18:21:57.349161    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.362280    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.363896    8332 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 18:21:57.366241    8332 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 18:21:57.370089    8332 addons.go:234] Setting addon default-storageclass=true in "addons-751971"
	I0913 18:21:57.373256    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:21:57.373704    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:21:57.389749    8332 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:21:57.397557    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 18:21:57.401890    8332 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:21:57.401918    8332 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 18:21:57.402007    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.402196    8332 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:21:57.402206    8332 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 18:21:57.402242    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.414340    8332 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:21:57.414359    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:21:57.414421    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.421984    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 18:21:57.446254    8332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 18:21:57.455002    8332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:21:57.457494    8332 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 18:21:57.463335    8332 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 18:21:57.463567    8332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:21:57.463732    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 18:21:57.464001    8332 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:21:57.477962    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 18:21:57.478119    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.485791    8332 out.go:177]   - Using image docker.io/busybox:stable
	I0913 18:21:57.477802    8332 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:21:57.489794    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 18:21:57.489876    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.494166    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 18:21:57.494422    8332 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 18:21:57.494743    8332 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:21:57.494759    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 18:21:57.494835    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.477940    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.510018    8332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:21:57.510147    8332 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 18:21:57.514217    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.534606    8332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:21:57.534628    8332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 18:21:57.534703    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.541194    8332 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 18:21:57.543853    8332 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:21:57.543875    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 18:21:57.543940    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.570169    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.571388    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.572000    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.606026    8332 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:21:57.606156    8332 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:21:57.606238    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:21:57.609050    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.614232    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.631321    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.702250    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.702616    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.714444    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.716898    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.723219    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.723895    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.727947    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:21:57.870259    8332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 18:21:57.870433    8332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 18:21:58.290936    8332 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:21:58.290961    8332 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 18:21:58.531761    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:21:58.540603    8332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:21:58.540681    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 18:21:58.616076    8332 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:21:58.616102    8332 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 18:21:58.630994    8332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:21:58.631021    8332 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 18:21:58.820224    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 18:21:58.827652    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 18:21:58.833514    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:21:58.885155    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:21:58.934868    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 18:21:58.943093    8332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:21:58.943118    8332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 18:21:58.949876    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:21:58.955937    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:21:58.975902    8332 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:21:58.975926    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 18:21:59.047644    8332 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:21:59.047689    8332 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 18:21:59.052415    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:21:59.096381    8332 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:21:59.096413    8332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 18:21:59.101068    8332 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:21:59.101094    8332 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 18:21:59.103789    8332 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:21:59.103821    8332 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 18:21:59.125115    8332 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:21:59.125139    8332 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 18:21:59.233517    8332 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:21:59.233537    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 18:21:59.259773    8332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:21:59.259800    8332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 18:21:59.272494    8332 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 18:21:59.272519    8332 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 18:21:59.276077    8332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:21:59.276103    8332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 18:21:59.336055    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:21:59.398829    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:21:59.489774    8332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:21:59.489801    8332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 18:21:59.516172    8332 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:21:59.516207    8332 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 18:21:59.529689    8332 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:21:59.529715    8332 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 18:21:59.819268    8332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:21:59.819294    8332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 18:21:59.834085    8332 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.963606255s)
	I0913 18:21:59.834952    8332 node_ready.go:35] waiting up to 6m0s for node "addons-751971" to be "Ready" ...
	I0913 18:21:59.835207    8332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.964870196s)
	I0913 18:21:59.835225    8332 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0913 18:21:59.840425    8332 node_ready.go:49] node "addons-751971" has status "Ready":"True"
	I0913 18:21:59.840456    8332 node_ready.go:38] duration metric: took 5.471438ms for node "addons-751971" to be "Ready" ...
	I0913 18:21:59.840468    8332 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:21:59.856453    8332 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace to be "Ready" ...
	I0913 18:21:59.869815    8332 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:21:59.869843    8332 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 18:21:59.875480    8332 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:21:59.875504    8332 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 18:22:00.216591    8332 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:22:00.216620    8332 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 18:22:00.229397    8332 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:22:00.229422    8332 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 18:22:00.340119    8332 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-751971" context rescaled to 1 replicas
	I0913 18:22:00.355907    8332 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:00.356873    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 18:22:00.479592    8332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:22:00.479625    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 18:22:00.579114    8332 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:22:00.579145    8332 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 18:22:00.667739    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:00.805355    8332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:22:00.805456    8332 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 18:22:00.906858    8332 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:00.906929    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 18:22:01.023496    8332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:22:01.023566    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 18:22:01.363039    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:01.695255    8332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:22:01.695336    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 18:22:01.883799    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:02.197918    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.666064761s)
	I0913 18:22:02.487835    8332 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:02.487941    8332 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 18:22:03.271629    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:04.371766    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:04.385264    8332 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 18:22:04.385413    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:22:04.411835    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:22:05.530561    8332 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 18:22:05.772245    8332 addons.go:234] Setting addon gcp-auth=true in "addons-751971"
	I0913 18:22:05.772368    8332 host.go:66] Checking if "addons-751971" exists ...
	I0913 18:22:05.772930    8332 cli_runner.go:164] Run: docker container inspect addons-751971 --format={{.State.Status}}
	I0913 18:22:05.801515    8332 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 18:22:05.801570    8332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-751971
	I0913 18:22:05.832199    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/addons-751971/id_rsa Username:docker}
	I0913 18:22:06.863696    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:07.642231    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.821970295s)
	I0913 18:22:07.642277    8332 addons.go:475] Verifying addon ingress=true in "addons-751971"
	I0913 18:22:07.642439    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.814760183s)
	I0913 18:22:07.642494    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.808957089s)
	I0913 18:22:07.644174    8332 out.go:177] * Verifying ingress addon...
	I0913 18:22:07.649315    8332 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 18:22:07.658142    8332 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 18:22:07.658171    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:08.154320    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:08.756335    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:08.870137    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:09.213441    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:09.678649    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:10.103341    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.218148854s)
	I0913 18:22:10.103571    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.051130492s)
	I0913 18:22:10.103439    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.168543931s)
	I0913 18:22:10.103598    8332 addons.go:475] Verifying addon registry=true in "addons-751971"
	I0913 18:22:10.103489    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.153574138s)
	I0913 18:22:10.103531    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.14756273s)
	I0913 18:22:10.103844    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.767757265s)
	I0913 18:22:10.103865    8332 addons.go:475] Verifying addon metrics-server=true in "addons-751971"
	I0913 18:22:10.103907    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.705049878s)
	I0913 18:22:10.104232    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.436397583s)
	W0913 18:22:10.104276    8332 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:10.104312    8332 retry.go:31] will retry after 171.606442ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:10.104387    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.741321429s)
	I0913 18:22:10.106933    8332 out.go:177] * Verifying registry addon...
	I0913 18:22:10.106933    8332 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-751971 service yakd-dashboard -n yakd-dashboard
	
	I0913 18:22:10.109935    8332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 18:22:10.150688    8332 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 18:22:10.150719    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:10.255766    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:10.276854    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:10.497265    8332 pod_ready.go:98] pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:22:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-13 18:21:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-13 18:21:58 +0000 UTC,FinishedAt:2024-09-13 18:22:08 +0000 UTC,ContainerID:docker://5a1d716cb7c952eb4bd04590f920d74d5d44dcf537e21c427b077c1558d65ea3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://5a1d716cb7c952eb4bd04590f920d74d5d44dcf537e21c427b077c1558d65ea3 Started:0x4001ab05e0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001c988c0} {Name:kube-api-access-qtd9z MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001c988d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0913 18:22:10.497309    8332 pod_ready.go:82] duration metric: took 10.640817072s for pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace to be "Ready" ...
	E0913 18:22:10.497322    8332 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-bmcs5" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:22:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-13 18:21:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-13 18:21:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-13 18:21:58 +0000 UTC,FinishedAt:2024-09-13 18:22:08 +0000 UTC,ContainerID:docker://5a1d716cb7c952eb4bd04590f920d74d5d44dcf537e21c427b077c1558d65ea3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://5a1d716cb7c952eb4bd04590f920d74d5d44dcf537e21c427b077c1558d65ea3 Started:0x4001ab05e0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001c988c0} {Name:kube-api-access-qtd9z MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001c988d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0913 18:22:10.497332    8332 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:10.631809    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:10.732980    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:11.112082    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.840357604s)
	I0913 18:22:11.112162    8332 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-751971"
	I0913 18:22:11.112410    8332 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.310871577s)
	I0913 18:22:11.115897    8332 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 18:22:11.116061    8332 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:11.119813    8332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 18:22:11.122199    8332 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 18:22:11.124132    8332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:22:11.124209    8332 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 18:22:11.138144    8332 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 18:22:11.138219    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:11.138854    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:11.166586    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:11.247983    8332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:22:11.248052    8332 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 18:22:11.315847    8332 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:11.315912    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 18:22:11.400433    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:11.636429    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:11.637870    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:11.730212    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:12.113884    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:12.125011    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:12.154635    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:12.476276    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.199339523s)
	I0913 18:22:12.503769    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:12.615262    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:12.624911    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:12.719886    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:12.876076    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.475539516s)
	I0913 18:22:12.879071    8332 addons.go:475] Verifying addon gcp-auth=true in "addons-751971"
	I0913 18:22:12.881362    8332 out.go:177] * Verifying gcp-auth addon...
	I0913 18:22:12.884056    8332 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 18:22:12.901428    8332 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:22:13.115063    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:13.125986    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:13.155010    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:13.614808    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:13.624942    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:13.654782    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:14.114415    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:14.125238    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:14.155803    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:14.614909    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:14.625249    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:14.653698    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:15.034497    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:15.121698    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:15.133407    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:15.154225    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:15.619798    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:15.629128    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:15.653800    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:16.114211    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:16.124986    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:16.154064    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:16.613819    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:16.624209    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:16.654582    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:17.114276    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:17.124816    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:17.154169    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:17.503800    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:17.613243    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:17.624801    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:17.653658    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:18.115738    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:18.128278    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:18.154121    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:18.613964    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:18.624680    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:18.653868    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:19.115117    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:19.125505    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:19.215789    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:19.512577    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:19.614689    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:19.625160    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:19.653729    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:20.115651    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:20.126189    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:20.154789    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:20.616587    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:20.625960    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:20.654526    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:21.115179    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:21.125922    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:21.154120    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:21.614689    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:21.625345    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:21.654631    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:22.009536    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:22.114389    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:22.125600    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:22.153944    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:22.613669    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:22.625677    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:22.653686    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:23.114286    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:23.124686    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:23.215176    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:23.613926    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:23.624771    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:23.653358    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:24.014222    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:24.119481    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:24.130964    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:24.155098    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:24.615977    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:24.633503    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:24.715503    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:25.115090    8332 kapi.go:107] duration metric: took 15.005152344s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 18:22:25.124789    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:25.154164    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:25.625802    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:25.654921    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:26.124912    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:26.154360    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:26.507261    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:26.648761    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:26.686132    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:27.125915    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:27.153606    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:27.627410    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:27.653719    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:28.125860    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:28.153930    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:28.625430    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:28.654075    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:29.003980    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:29.124897    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:29.154035    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:29.625045    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:29.653836    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:30.128041    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:30.162575    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:30.625059    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:30.654147    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:31.006491    8332 pod_ready.go:103] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:31.126197    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:31.154394    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:31.626004    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:31.654689    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:32.124882    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:32.154589    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:32.625493    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:32.656936    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:33.007645    8332 pod_ready.go:93] pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:33.007731    8332 pod_ready.go:82] duration metric: took 22.510384585s for pod "coredns-7c65d6cfc9-f7rlh" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.007757    8332 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.016428    8332 pod_ready.go:93] pod "etcd-addons-751971" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:33.016503    8332 pod_ready.go:82] duration metric: took 8.724848ms for pod "etcd-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.016527    8332 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.025816    8332 pod_ready.go:93] pod "kube-apiserver-addons-751971" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:33.025890    8332 pod_ready.go:82] duration metric: took 9.337859ms for pod "kube-apiserver-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.025916    8332 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.035163    8332 pod_ready.go:93] pod "kube-controller-manager-addons-751971" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:33.035221    8332 pod_ready.go:82] duration metric: took 9.279668ms for pod "kube-controller-manager-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.035262    8332 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xk8dq" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.042391    8332 pod_ready.go:93] pod "kube-proxy-xk8dq" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:33.042478    8332 pod_ready.go:82] duration metric: took 7.184392ms for pod "kube-proxy-xk8dq" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.042513    8332 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.126453    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:33.155409    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:33.402836    8332 pod_ready.go:93] pod "kube-scheduler-addons-751971" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:33.402898    8332 pod_ready.go:82] duration metric: took 360.346692ms for pod "kube-scheduler-addons-751971" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:33.402931    8332 pod_ready.go:39] duration metric: took 33.562422778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:33.402982    8332 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:22:33.403082    8332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:33.439479    8332 api_server.go:72] duration metric: took 36.370462605s to wait for apiserver process to appear ...
	I0913 18:22:33.439554    8332 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:22:33.439592    8332 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0913 18:22:33.448891    8332 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0913 18:22:33.450141    8332 api_server.go:141] control plane version: v1.31.1
	I0913 18:22:33.450163    8332 api_server.go:131] duration metric: took 10.586561ms to wait for apiserver health ...
	I0913 18:22:33.450177    8332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:22:33.613755    8332 system_pods.go:59] 17 kube-system pods found
	I0913 18:22:33.613832    8332 system_pods.go:61] "coredns-7c65d6cfc9-f7rlh" [01cf10cf-73de-4fa2-b842-43ef8afb2a51] Running
	I0913 18:22:33.613858    8332 system_pods.go:61] "csi-hostpath-attacher-0" [8e8009ee-04f0-41c5-8de0-4a9675a24a9f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:33.613897    8332 system_pods.go:61] "csi-hostpath-resizer-0" [2d19668d-4293-4e75-ba75-b372ca51d9fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:33.613929    8332 system_pods.go:61] "csi-hostpathplugin-wn77f" [d63346bf-2449-4745-8dda-1dfbdd6037a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:33.613955    8332 system_pods.go:61] "etcd-addons-751971" [26a644b0-ae88-40c0-ac41-a1cb8c9d9d99] Running
	I0913 18:22:33.613979    8332 system_pods.go:61] "kube-apiserver-addons-751971" [1afce824-b84c-4053-b057-be263c51d991] Running
	I0913 18:22:33.614011    8332 system_pods.go:61] "kube-controller-manager-addons-751971" [4c0f689f-3bb4-415f-9b85-686eb97bf861] Running
	I0913 18:22:33.614057    8332 system_pods.go:61] "kube-ingress-dns-minikube" [6a9112d1-487e-41f4-aa6b-03845ffbd0ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 18:22:33.614078    8332 system_pods.go:61] "kube-proxy-xk8dq" [6cc3ee8d-6ddf-4a95-9d9f-722c1a52a313] Running
	I0913 18:22:33.614131    8332 system_pods.go:61] "kube-scheduler-addons-751971" [d506462c-49fb-4900-9b89-15ad973eddff] Running
	I0913 18:22:33.614163    8332 system_pods.go:61] "metrics-server-84c5f94fbc-b2vjl" [dd32f9dc-f42f-4f79-b76e-2b8a1e76dbee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:33.614187    8332 system_pods.go:61] "nvidia-device-plugin-daemonset-8dklz" [4383ec80-7943-4b54-a0ff-b49159c7adc4] Running
	I0913 18:22:33.614212    8332 system_pods.go:61] "registry-66c9cd494c-btz86" [71148c5e-7525-45fb-8380-24b29240e9e4] Running
	I0913 18:22:33.614245    8332 system_pods.go:61] "registry-proxy-ftdlk" [6e2bf204-eddc-452f-8693-4f930b88a93b] Running
	I0913 18:22:33.614277    8332 system_pods.go:61] "snapshot-controller-56fcc65765-56vcv" [3f9f9969-27e4-4177-9089-ca71e657702f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:33.614306    8332 system_pods.go:61] "snapshot-controller-56fcc65765-snptv" [06813d57-8355-4d20-8374-da60f6e856af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:33.614334    8332 system_pods.go:61] "storage-provisioner" [75851ff8-11e2-47a7-a898-976fa6d91d83] Running
	I0913 18:22:33.614372    8332 system_pods.go:74] duration metric: took 164.187379ms to wait for pod list to return data ...
	I0913 18:22:33.614396    8332 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:22:33.705241    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:33.706524    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:33.801981    8332 default_sa.go:45] found service account: "default"
	I0913 18:22:33.802144    8332 default_sa.go:55] duration metric: took 187.726317ms for default service account to be created ...
	I0913 18:22:33.802179    8332 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:22:34.011054    8332 system_pods.go:86] 17 kube-system pods found
	I0913 18:22:34.011145    8332 system_pods.go:89] "coredns-7c65d6cfc9-f7rlh" [01cf10cf-73de-4fa2-b842-43ef8afb2a51] Running
	I0913 18:22:34.011173    8332 system_pods.go:89] "csi-hostpath-attacher-0" [8e8009ee-04f0-41c5-8de0-4a9675a24a9f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:34.011219    8332 system_pods.go:89] "csi-hostpath-resizer-0" [2d19668d-4293-4e75-ba75-b372ca51d9fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:34.011252    8332 system_pods.go:89] "csi-hostpathplugin-wn77f" [d63346bf-2449-4745-8dda-1dfbdd6037a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:34.011276    8332 system_pods.go:89] "etcd-addons-751971" [26a644b0-ae88-40c0-ac41-a1cb8c9d9d99] Running
	I0913 18:22:34.011304    8332 system_pods.go:89] "kube-apiserver-addons-751971" [1afce824-b84c-4053-b057-be263c51d991] Running
	I0913 18:22:34.011337    8332 system_pods.go:89] "kube-controller-manager-addons-751971" [4c0f689f-3bb4-415f-9b85-686eb97bf861] Running
	I0913 18:22:34.011365    8332 system_pods.go:89] "kube-ingress-dns-minikube" [6a9112d1-487e-41f4-aa6b-03845ffbd0ce] Running
	I0913 18:22:34.011389    8332 system_pods.go:89] "kube-proxy-xk8dq" [6cc3ee8d-6ddf-4a95-9d9f-722c1a52a313] Running
	I0913 18:22:34.011415    8332 system_pods.go:89] "kube-scheduler-addons-751971" [d506462c-49fb-4900-9b89-15ad973eddff] Running
	I0913 18:22:34.011454    8332 system_pods.go:89] "metrics-server-84c5f94fbc-b2vjl" [dd32f9dc-f42f-4f79-b76e-2b8a1e76dbee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:34.011483    8332 system_pods.go:89] "nvidia-device-plugin-daemonset-8dklz" [4383ec80-7943-4b54-a0ff-b49159c7adc4] Running
	I0913 18:22:34.011508    8332 system_pods.go:89] "registry-66c9cd494c-btz86" [71148c5e-7525-45fb-8380-24b29240e9e4] Running
	I0913 18:22:34.011535    8332 system_pods.go:89] "registry-proxy-ftdlk" [6e2bf204-eddc-452f-8693-4f930b88a93b] Running
	I0913 18:22:34.011575    8332 system_pods.go:89] "snapshot-controller-56fcc65765-56vcv" [3f9f9969-27e4-4177-9089-ca71e657702f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:34.011608    8332 system_pods.go:89] "snapshot-controller-56fcc65765-snptv" [06813d57-8355-4d20-8374-da60f6e856af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:34.011637    8332 system_pods.go:89] "storage-provisioner" [75851ff8-11e2-47a7-a898-976fa6d91d83] Running
	I0913 18:22:34.011664    8332 system_pods.go:126] duration metric: took 209.449319ms to wait for k8s-apps to be running ...
	I0913 18:22:34.011706    8332 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:22:34.011806    8332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:22:34.029628    8332 system_svc.go:56] duration metric: took 17.914297ms WaitForService to wait for kubelet
	I0913 18:22:34.029701    8332 kubeadm.go:582] duration metric: took 36.960688352s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:22:34.029737    8332 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:22:34.125492    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:34.156187    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:34.201913    8332 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0913 18:22:34.201946    8332 node_conditions.go:123] node cpu capacity is 2
	I0913 18:22:34.201960    8332 node_conditions.go:105] duration metric: took 172.20519ms to run NodePressure ...
	I0913 18:22:34.201972    8332 start.go:241] waiting for startup goroutines ...
	I0913 18:22:34.624898    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:34.653440    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:35.125559    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:35.153759    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:35.625084    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:35.655058    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:36.125356    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:36.153975    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:36.625352    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:36.653980    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:37.125691    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:37.224900    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:37.625050    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:37.654434    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:38.125674    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:38.153409    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:38.625384    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:38.654759    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.125816    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:39.154065    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:39.627746    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:39.656772    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:40.126992    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.154303    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:40.627037    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.655217    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.125880    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.153747    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:41.632409    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.654613    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.147950    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.154186    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:42.624316    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.654692    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.125086    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.154572    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:43.625210    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.654253    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.135453    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.153797    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:44.625184    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.654126    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.150064    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.156669    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:45.625841    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.654357    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:46.127595    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.154172    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:46.625120    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.655804    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.125859    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.154470    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:47.625046    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.654912    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.124785    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.153965    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:48.624757    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.654210    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.125474    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.153782    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:49.626928    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.659909    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.125465    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.154489    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:50.625304    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.659686    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.125912    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.154099    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:51.626224    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.654753    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.125329    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.154038    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:52.624903    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.653613    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.124836    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.154086    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:53.625761    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.654363    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.125581    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.154287    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:54.625636    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.653534    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.125750    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.153817    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:55.625445    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.653854    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.124724    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.153713    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:56.624556    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.654376    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:57.128693    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.154274    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:57.624839    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.653707    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.126243    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.154982    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:58.624786    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.654647    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.126682    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.225015    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:22:59.625456    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.653547    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:00.141523    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.157729    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:00.625416    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.654764    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.125545    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.154745    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:01.626218    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.654356    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.125678    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.158944    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:02.626814    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.656095    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.124891    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.153608    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:03.625318    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.654274    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.125717    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.154238    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:04.627903    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.654344    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.125278    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.153703    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:05.625465    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.653592    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.125706    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.153864    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:06.625003    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.653601    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:07.125533    8332 kapi.go:107] duration metric: took 56.005720241s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 18:23:07.154348    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:07.654434    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.153864    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:08.653609    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:09.153359    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:09.653783    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.154559    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:10.653539    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.154203    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:11.654140    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.157287    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:12.653824    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.154661    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:13.653922    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.155905    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:14.655165    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.154470    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:15.654102    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.154839    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:16.654139    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.154762    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:17.655887    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.157260    8332 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 18:23:18.654952    8332 kapi.go:107] duration metric: took 1m11.005632553s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 18:23:34.908610    8332 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:23:34.908639    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.387689    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:35.887988    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.387432    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:36.888475    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.388175    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:37.888020    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.387547    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:38.888353    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.388061    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:39.887486    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.388517    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:40.888559    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.387802    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:41.889226    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.390964    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:42.887856    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.387502    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:43.888621    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.387545    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:44.888139    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.389863    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:45.888036    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.388630    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:46.887234    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.388468    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:47.888975    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.388320    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:48.888193    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.388679    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:49.887833    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:50.387790    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:50.887966    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.391567    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:51.887536    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.388626    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:52.887883    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.387671    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:53.887346    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.387585    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:54.887550    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:55.387735    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:55.887655    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.387630    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:56.888431    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.388670    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:57.887283    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.388304    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:58.888423    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.387087    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.888054    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.401206    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.887906    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:01.387868    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:01.888337    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:02.388918    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:02.888826    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:03.388103    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:03.888247    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:04.387820    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:04.888239    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:05.388203    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:05.887903    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:06.391964    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:06.888110    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:07.387747    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:07.888357    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:08.387675    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:08.887216    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:09.387795    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:09.887624    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:10.388212    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:10.888088    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:11.388149    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:11.888305    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:12.388490    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:12.888520    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:13.387269    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:13.888703    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:14.387478    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:14.888530    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:15.387454    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:15.892605    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:16.387758    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:16.887635    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:17.387610    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:17.887505    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:18.388594    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:18.890141    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:19.387977    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:19.887292    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:20.395135    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:20.887452    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:21.389204    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:21.888332    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:22.388435    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:22.888353    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:23.388115    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:23.888115    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:24.387404    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:24.888458    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:25.388718    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:25.888069    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:26.387207    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:26.888507    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:27.388955    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:27.887371    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:28.388272    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:28.888372    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:29.388266    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:29.887404    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:30.387687    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:30.887852    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:31.389063    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:31.887245    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:32.388476    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:32.888534    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:33.389743    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:33.887520    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:34.388289    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:34.888469    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:35.387725    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:35.887634    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:36.387651    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:36.887646    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:37.388269    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:37.888125    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:38.387826    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:38.888015    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:39.387905    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:39.888141    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:40.388063    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:40.889149    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:41.388213    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:41.888050    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:42.388590    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:42.887708    8332 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:43.387648    8332 kapi.go:107] duration metric: took 2m30.50359145s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 18:24:43.390191    8332 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-751971 cluster.
	I0913 18:24:43.392435    8332 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 18:24:43.394502    8332 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 18:24:43.396780    8332 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, cloud-spanner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0913 18:24:43.398954    8332 addons.go:510] duration metric: took 2m46.329562189s for enable addons: enabled=[storage-provisioner ingress-dns storage-provisioner-rancher volcano cloud-spanner nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0913 18:24:43.399016    8332 start.go:246] waiting for cluster config update ...
	I0913 18:24:43.399038    8332 start.go:255] writing updated cluster config ...
	I0913 18:24:43.399345    8332 ssh_runner.go:195] Run: rm -f paused
	I0913 18:24:43.752828    8332 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:24:43.755246    8332 out.go:177] * Done! kubectl is now configured to use "addons-751971" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.555430627Z" level=info msg="ignoring event" container=7dfe1054b92869ac12232d55b460c2df0f915796b904156085501926646bd1c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.561938345Z" level=info msg="ignoring event" container=caafd6642070fc73305461cf784f9a9f52c919c596adaf3624691f4eadec0ab2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.572706929Z" level=info msg="ignoring event" container=c6bf80624c33cc26bdc302723e830ab3fe9ad6bf4df030ce64492768caeb1248 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.576880147Z" level=info msg="ignoring event" container=ea528a4b2b4afcc14a1b6caa89ac73cafb29082301b3286abf2c4d2c6caf6a0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.594973978Z" level=info msg="ignoring event" container=f7aae6fb4c6bda988bbb751bfb4b95dd9389f5fc1b1c4e13486fa10ff47c79f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.595018770Z" level=info msg="ignoring event" container=233d32314bf586558cd603a41bf01abff946b3898d0aacfc8b7fd7bb7ec83d0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.617819204Z" level=info msg="ignoring event" container=aa97bca978df5a8c23b8c7b31a8aae47b16c74713ebcfca47741c3594cd6b261 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.724509893Z" level=info msg="ignoring event" container=7da0da3bee95fa4e33c4d6fd4af283cf8c0c6319e84d8faa577a23df33b18355 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.838257973Z" level=info msg="ignoring event" container=a87235824444b0af485b69b75b8c078cbec224f9a6d132bd1de44e7653b1526d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:26 addons-751971 dockerd[1280]: time="2024-09-13T18:34:26.891101970Z" level=info msg="ignoring event" container=ca76a20282b210fd22f54063bdfebc991b4441759670500e7bbaf1753ce76645 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:33 addons-751971 dockerd[1280]: time="2024-09-13T18:34:33.123976775Z" level=info msg="ignoring event" container=1297f7b5683a7ff90bb23a8653ad092991bd2d1a400bcb66be90f176e2b20c76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:33 addons-751971 dockerd[1280]: time="2024-09-13T18:34:33.157162152Z" level=info msg="ignoring event" container=1fc93a3db5ceee4cb223921c3b7e2f3b386cec0e53bce7f323c7467661f14266 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:33 addons-751971 dockerd[1280]: time="2024-09-13T18:34:33.329910433Z" level=info msg="ignoring event" container=783ba918db311e75ca67dbfbc443b22c5e057cd2a7ed19114f07fcf5fdc66258 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:33 addons-751971 dockerd[1280]: time="2024-09-13T18:34:33.361416296Z" level=info msg="ignoring event" container=1f5d8f8eac69244a28dd340574bb9b92760a1b898cab41b32a16801e72b7506e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:39 addons-751971 dockerd[1280]: time="2024-09-13T18:34:39.773098920Z" level=info msg="ignoring event" container=08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:39 addons-751971 dockerd[1280]: time="2024-09-13T18:34:39.941092567Z" level=info msg="ignoring event" container=fcbb44bcdf1371c297adba2f4e163704c799344688e638a3a0cf29c5306e8c75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:40 addons-751971 dockerd[1280]: time="2024-09-13T18:34:40.390655563Z" level=info msg="ignoring event" container=a4e5bec75c3ff7a2ccad55df67a0b939cba16b1dfddfdd7cfa9a8d2bc1cd5f38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:40 addons-751971 cri-dockerd[1536]: time="2024-09-13T18:34:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/db0cab6d2c47d2e869374a406febee6ba0a4fd402a2347121e5729952376f79a/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 13 18:34:41 addons-751971 dockerd[1280]: time="2024-09-13T18:34:41.011937562Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 13 18:34:41 addons-751971 dockerd[1280]: time="2024-09-13T18:34:41.303916897Z" level=info msg="ignoring event" container=d3cff7e3d81b449aa5e2a89c2334593546d094d673293a12dd33a703163348f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:41 addons-751971 dockerd[1280]: time="2024-09-13T18:34:41.478553074Z" level=info msg="ignoring event" container=d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:41 addons-751971 dockerd[1280]: time="2024-09-13T18:34:41.668820631Z" level=info msg="ignoring event" container=4a9a55af86e252d20a2ea77b1f57acf366f8a3a95e31c98e67fbb3e6e93395c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:41 addons-751971 cri-dockerd[1536]: time="2024-09-13T18:34:41Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 13 18:34:41 addons-751971 dockerd[1280]: time="2024-09-13T18:34:41.880625545Z" level=info msg="ignoring event" container=355c969ac34cede2773b0fdb4bca030f6113d62c1ab64759b1c07369548e4a1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:34:42 addons-751971 dockerd[1280]: time="2024-09-13T18:34:42.191472631Z" level=info msg="ignoring event" container=807db05e7b645041f7ba41315b85e9e09a7ea4f40e7f528709eb13d5f0098c2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	807db05e7b645       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              1 second ago        Exited              helper-pod                0                   db0cab6d2c47d       helper-pod-create-pvc-f65c88c7-360e-4112-b64c-a202b9b629b8
	1d085950c91b6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            57 seconds ago      Exited              gadget                    7                   493a940121345       gadget-rrzsl
	e13ad66da97ad       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   f5bb152bf4764       gcp-auth-89d5ffd79-m2qrv
	757592eaa8c9f       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   63dfa70b14a2a       ingress-nginx-controller-bc57996ff-dmx8c
	88064bc01bd1c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   9061b0d5b3780       ingress-nginx-admission-patch-ptf9d
	2c73aaae083dd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   d9fab9f563479       ingress-nginx-admission-create-hfss6
	d7921928798c4       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   f8b3bc812e3af       metrics-server-84c5f94fbc-b2vjl
	b0404fc538fd7       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   36dfabca691c1       local-path-provisioner-86d989889c-q6kw9
	1cffd4eeee9bc       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   abb75aadeb9b0       kube-ingress-dns-minikube
	d209aa52f8b50       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   355c969ac34ce       registry-proxy-ftdlk
	d3cff7e3d81b4       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   4a9a55af86e25       registry-66c9cd494c-btz86
	bd134ba0b3977       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   b43816a9d6a35       cloud-spanner-emulator-769b77f747-w9b87
	7dc29f0268b90       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   e592ca700eaff       storage-provisioner
	5cf84a9c00fa5       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   db38f2dafadb9       coredns-7c65d6cfc9-f7rlh
	b02f26a03559a       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   57340563ee316       kube-proxy-xk8dq
	39cdf3be8fb02       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   bcc5ba3db91a6       etcd-addons-751971
	c7c0f9529bf26       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   28a2742b9b720       kube-controller-manager-addons-751971
	6fb24e1441a81       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   ab0bf7044c264       kube-scheduler-addons-751971
	7ffe5f61cf6b7       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   651cabfc6a3db       kube-apiserver-addons-751971
	
	
	==> controller_ingress [757592eaa8c9] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0913 18:23:18.045786       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0913 18:23:19.816279       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0913 18:23:19.836757       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0913 18:23:19.857160       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0913 18:23:19.871443       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5e8ac601-1c6c-41c2-b81d-6830dc413045", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0913 18:23:19.894803       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"6c8b0976-cf85-4267-979f-9c522135767b", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0913 18:23:19.894868       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"c3975ec1-88d6-4ffa-b907-e662cb902b65", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0913 18:23:21.058625       6 nginx.go:317] "Starting NGINX process"
	I0913 18:23:21.059059       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0913 18:23:21.060424       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0913 18:23:21.060865       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0913 18:23:21.073697       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-dmx8c"
	I0913 18:23:21.073903       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0913 18:23:21.085241       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-dmx8c" node="addons-751971"
	I0913 18:23:21.107666       6 controller.go:213] "Backend successfully reloaded"
	I0913 18:23:21.107812       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0913 18:23:21.108033       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-dmx8c", UID:"9f178e15-9cac-4daf-860c-fe1147c8c271", APIVersion:"v1", ResourceVersion:"1235", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [5cf84a9c00fa] <==
	[INFO] 10.244.0.7:35073 - 48319 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064443s
	[INFO] 10.244.0.7:55959 - 19730 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002252454s
	[INFO] 10.244.0.7:55959 - 58896 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002294333s
	[INFO] 10.244.0.7:40997 - 48372 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096691s
	[INFO] 10.244.0.7:40997 - 55031 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00008037s
	[INFO] 10.244.0.7:54397 - 48208 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123727s
	[INFO] 10.244.0.7:54397 - 15695 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00004768s
	[INFO] 10.244.0.7:48181 - 21714 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042479s
	[INFO] 10.244.0.7:48181 - 15831 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035381s
	[INFO] 10.244.0.7:45450 - 60013 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041863s
	[INFO] 10.244.0.7:45450 - 55403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034478s
	[INFO] 10.244.0.7:58734 - 13302 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002914295s
	[INFO] 10.244.0.7:58734 - 2036 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003056228s
	[INFO] 10.244.0.7:60704 - 54083 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000057666s
	[INFO] 10.244.0.7:60704 - 22592 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037096s
	[INFO] 10.244.0.25:45790 - 63764 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000221516s
	[INFO] 10.244.0.25:32956 - 53004 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000130019s
	[INFO] 10.244.0.25:59627 - 58488 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139152s
	[INFO] 10.244.0.25:50410 - 40167 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125581s
	[INFO] 10.244.0.25:53604 - 44157 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000162094s
	[INFO] 10.244.0.25:38789 - 41984 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180818s
	[INFO] 10.244.0.25:40478 - 9338 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002226502s
	[INFO] 10.244.0.25:49537 - 45913 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001925519s
	[INFO] 10.244.0.25:40735 - 26168 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002659826s
	[INFO] 10.244.0.25:43116 - 40121 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002811597s
	
	
	==> describe nodes <==
	Name:               addons-751971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-751971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=addons-751971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_21_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-751971
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:21:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-751971
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:30:34 +0000   Fri, 13 Sep 2024 18:21:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:30:34 +0000   Fri, 13 Sep 2024 18:21:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:30:34 +0000   Fri, 13 Sep 2024 18:21:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:30:34 +0000   Fri, 13 Sep 2024 18:21:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-751971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 79e7890c98fd4aa0a58e138259f4ffb9
	  System UUID:                f5120785-04a3-4e02-866f-afb666a858aa
	  Boot ID:                    fb4fe98f-a4cf-4734-a051-ddaac6a0a8ad
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-w9b87                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-rrzsl                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-m2qrv                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-dmx8c                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-f7rlh                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-751971                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-751971                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-751971                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xk8dq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-751971                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-b2vjl                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-f65c88c7-360e-4112-b64c-a202b9b629b8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-86d989889c-q6kw9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-751971 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-751971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-751971 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-751971 event: Registered Node addons-751971 in Controller
	
	
	==> dmesg <==
	[Sep13 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016065] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.481435] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.777977] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.548436] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [39cdf3be8fb0] <==
	{"level":"info","ts":"2024-09-13T18:21:46.428271Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-13T18:21:47.062080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-13T18:21:47.062324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-13T18:21:47.062432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-13T18:21:47.062539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:47.062624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:47.062717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:47.062809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:47.066331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:21:47.066788Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:21:47.066302Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-751971 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T18:21:47.067212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:21:47.067458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T18:21:47.067583Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T18:21:47.068295Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:21:47.070284Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-13T18:21:47.069022Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:21:47.075145Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T18:21:47.069058Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:21:47.075615Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:21:47.080174Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:22:10.461154Z","caller":"traceutil/trace.go:171","msg":"trace[1145434819] transaction","detail":"{read_only:false; response_revision:836; number_of_response:1; }","duration":"110.404295ms","start":"2024-09-13T18:22:10.350725Z","end":"2024-09-13T18:22:10.461129Z","steps":["trace[1145434819] 'process raft request'  (duration: 74.977942ms)","trace[1145434819] 'compare'  (duration: 35.335374ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-13T18:31:47.116283Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1850}
	{"level":"info","ts":"2024-09-13T18:31:47.176689Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1850,"took":"59.880116ms","hash":1481345617,"current-db-size-bytes":8859648,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4915200,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-13T18:31:47.176741Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1481345617,"revision":1850,"compact-revision":-1}
	
	
	==> gcp-auth [e13ad66da97a] <==
	2024/09/13 18:24:42 GCP Auth Webhook started!
	2024/09/13 18:25:00 Ready to marshal response ...
	2024/09/13 18:25:00 Ready to write response ...
	2024/09/13 18:25:00 Ready to marshal response ...
	2024/09/13 18:25:00 Ready to write response ...
	2024/09/13 18:25:25 Ready to marshal response ...
	2024/09/13 18:25:25 Ready to write response ...
	2024/09/13 18:25:25 Ready to marshal response ...
	2024/09/13 18:25:25 Ready to write response ...
	2024/09/13 18:25:25 Ready to marshal response ...
	2024/09/13 18:25:25 Ready to write response ...
	2024/09/13 18:33:40 Ready to marshal response ...
	2024/09/13 18:33:40 Ready to write response ...
	2024/09/13 18:33:48 Ready to marshal response ...
	2024/09/13 18:33:48 Ready to write response ...
	2024/09/13 18:34:16 Ready to marshal response ...
	2024/09/13 18:34:16 Ready to write response ...
	2024/09/13 18:34:40 Ready to marshal response ...
	2024/09/13 18:34:40 Ready to write response ...
	2024/09/13 18:34:40 Ready to marshal response ...
	2024/09/13 18:34:40 Ready to write response ...
	
	
	==> kernel <==
	 18:34:43 up 17 min,  0 users,  load average: 0.57, 0.55, 0.52
	Linux addons-751971 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [7ffe5f61cf6b] <==
	I0913 18:25:16.568065       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 18:25:16.618828       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 18:25:16.666459       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0913 18:25:16.696516       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0913 18:25:17.307013       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0913 18:25:17.314025       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0913 18:25:17.314035       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0913 18:25:17.389305       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0913 18:25:17.667113       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 18:25:17.913883       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0913 18:33:56.336625       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0913 18:34:25.255839       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0913 18:34:32.806626       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:34:32.806671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:34:32.841138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:34:32.845864       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:34:32.854025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:34:32.854157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:34:32.876128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:34:32.876189       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 18:34:32.903531       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 18:34:32.903646       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0913 18:34:33.855777       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 18:34:33.904574       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0913 18:34:33.945756       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [c7c0f9529bf2] <==
	E0913 18:34:33.171510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0913 18:34:33.857754       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0913 18:34:33.906272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0913 18:34:33.947636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:34.857499       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:34.857541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:35.315022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:35.315078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:35.519748       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:35.519812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:36.275624       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:36.275679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:37.421059       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:37.421119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:38.016820       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:38.016871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:38.175828       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:38.176054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:34:41.132701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.637µs"
	W0913 18:34:41.672769       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:41.672818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:42.436673       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:42.436719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:34:43.062408       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:34:43.062451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [b02f26a03559] <==
	I0913 18:21:58.499894       1 server_linux.go:66] "Using iptables proxy"
	I0913 18:21:58.609729       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0913 18:21:58.609791       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:21:58.668831       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0913 18:21:58.668895       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:21:58.674688       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:21:58.675003       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:21:58.675020       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:21:58.679066       1 config.go:199] "Starting service config controller"
	I0913 18:21:58.679096       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:21:58.679123       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:21:58.679130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:21:58.679990       1 config.go:328] "Starting node config controller"
	I0913 18:21:58.680002       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:21:58.779500       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:21:58.779561       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:21:58.780652       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6fb24e1441a8] <==
	W0913 18:21:49.571987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:49.574566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:49.572045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 18:21:49.574732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:49.572087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 18:21:49.574909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:49.572132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:21:49.575068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:49.572176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 18:21:49.575262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:49.572222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:21:49.575436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:50.414560       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:21:50.414838       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 18:21:50.446545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:21:50.446864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:50.494279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:21:50.494532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:50.570951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:50.571199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:50.577540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:21:50.577797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:50.634878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:50.635137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 18:21:53.363595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.585384    2322 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/22827f89-9655-46bd-bbf7-c0a5427398c8-gcp-creds\") pod \"22827f89-9655-46bd-bbf7-c0a5427398c8\" (UID: \"22827f89-9655-46bd-bbf7-c0a5427398c8\") "
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.585859    2322 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mfsz\" (UniqueName: \"kubernetes.io/projected/22827f89-9655-46bd-bbf7-c0a5427398c8-kube-api-access-8mfsz\") pod \"22827f89-9655-46bd-bbf7-c0a5427398c8\" (UID: \"22827f89-9655-46bd-bbf7-c0a5427398c8\") "
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.585648    2322 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22827f89-9655-46bd-bbf7-c0a5427398c8-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "22827f89-9655-46bd-bbf7-c0a5427398c8" (UID: "22827f89-9655-46bd-bbf7-c0a5427398c8"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.588414    2322 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22827f89-9655-46bd-bbf7-c0a5427398c8-kube-api-access-8mfsz" (OuterVolumeSpecName: "kube-api-access-8mfsz") pod "22827f89-9655-46bd-bbf7-c0a5427398c8" (UID: "22827f89-9655-46bd-bbf7-c0a5427398c8"). InnerVolumeSpecName "kube-api-access-8mfsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.593437    2322 scope.go:117] "RemoveContainer" containerID="08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae"
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.674681    2322 scope.go:117] "RemoveContainer" containerID="08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae"
	Sep 13 18:34:40 addons-751971 kubelet[2322]: E0913 18:34:40.675741    2322 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae" containerID="08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae"
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.675780    2322 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae"} err="failed to get container status \"08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae\": rpc error: code = Unknown desc = Error response from daemon: No such container: 08e4273a5fd10a53d8a1534eb83ffcc256fd8d49587087f46a3c2f522c22b2ae"
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.687236    2322 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/22827f89-9655-46bd-bbf7-c0a5427398c8-gcp-creds\") on node \"addons-751971\" DevicePath \"\""
	Sep 13 18:34:40 addons-751971 kubelet[2322]: I0913 18:34:40.687276    2322 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8mfsz\" (UniqueName: \"kubernetes.io/projected/22827f89-9655-46bd-bbf7-c0a5427398c8-kube-api-access-8mfsz\") on node \"addons-751971\" DevicePath \"\""
	Sep 13 18:34:41 addons-751971 kubelet[2322]: I0913 18:34:41.109230    2322 scope.go:117] "RemoveContainer" containerID="1d085950c91b67cf7889a667638d455d8709f011dfe84aac6f9bc732baf2cccd"
	Sep 13 18:34:41 addons-751971 kubelet[2322]: E0913 18:34:41.109403    2322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rrzsl_gadget(f86babd5-6c92-4567-8829-fe55d3a566cd)\"" pod="gadget/gadget-rrzsl" podUID="f86babd5-6c92-4567-8829-fe55d3a566cd"
	Sep 13 18:34:41 addons-751971 kubelet[2322]: I0913 18:34:41.913298    2322 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvsv7\" (UniqueName: \"kubernetes.io/projected/71148c5e-7525-45fb-8380-24b29240e9e4-kube-api-access-qvsv7\") pod \"71148c5e-7525-45fb-8380-24b29240e9e4\" (UID: \"71148c5e-7525-45fb-8380-24b29240e9e4\") "
	Sep 13 18:34:41 addons-751971 kubelet[2322]: I0913 18:34:41.929091    2322 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71148c5e-7525-45fb-8380-24b29240e9e4-kube-api-access-qvsv7" (OuterVolumeSpecName: "kube-api-access-qvsv7") pod "71148c5e-7525-45fb-8380-24b29240e9e4" (UID: "71148c5e-7525-45fb-8380-24b29240e9e4"). InnerVolumeSpecName "kube-api-access-qvsv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.015003    2322 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qvsv7\" (UniqueName: \"kubernetes.io/projected/71148c5e-7525-45fb-8380-24b29240e9e4-kube-api-access-qvsv7\") on node \"addons-751971\" DevicePath \"\""
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.117477    2322 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlv7k\" (UniqueName: \"kubernetes.io/projected/6e2bf204-eddc-452f-8693-4f930b88a93b-kube-api-access-jlv7k\") pod \"6e2bf204-eddc-452f-8693-4f930b88a93b\" (UID: \"6e2bf204-eddc-452f-8693-4f930b88a93b\") "
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.127184    2322 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2bf204-eddc-452f-8693-4f930b88a93b-kube-api-access-jlv7k" (OuterVolumeSpecName: "kube-api-access-jlv7k") pod "6e2bf204-eddc-452f-8693-4f930b88a93b" (UID: "6e2bf204-eddc-452f-8693-4f930b88a93b"). InnerVolumeSpecName "kube-api-access-jlv7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.129720    2322 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22827f89-9655-46bd-bbf7-c0a5427398c8" path="/var/lib/kubelet/pods/22827f89-9655-46bd-bbf7-c0a5427398c8/volumes"
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.130522    2322 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4383ec80-7943-4b54-a0ff-b49159c7adc4" path="/var/lib/kubelet/pods/4383ec80-7943-4b54-a0ff-b49159c7adc4/volumes"
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.219356    2322 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jlv7k\" (UniqueName: \"kubernetes.io/projected/6e2bf204-eddc-452f-8693-4f930b88a93b-kube-api-access-jlv7k\") on node \"addons-751971\" DevicePath \"\""
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.710980    2322 scope.go:117] "RemoveContainer" containerID="d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986"
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.779580    2322 scope.go:117] "RemoveContainer" containerID="d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986"
	Sep 13 18:34:42 addons-751971 kubelet[2322]: E0913 18:34:42.781669    2322 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986" containerID="d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986"
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.781891    2322 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986"} err="failed to get container status \"d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986\": rpc error: code = Unknown desc = Error response from daemon: No such container: d209aa52f8b50888d0a817afe8504b70d545ece045f4d6658419a2f52e8bf986"
	Sep 13 18:34:42 addons-751971 kubelet[2322]: I0913 18:34:42.783121    2322 scope.go:117] "RemoveContainer" containerID="d3cff7e3d81b449aa5e2a89c2334593546d094d673293a12dd33a703163348f6"
	
	
	==> storage-provisioner [7dc29f0268b9] <==
	I0913 18:22:03.410383       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:22:03.427430       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:22:03.427535       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:22:03.440559       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:22:03.440778       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-751971_87bb373c-794b-45f6-bbd2-ba79e22ee44f!
	I0913 18:22:03.442032       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcf4cae8-6f05-4449-b24c-ff40449659b7", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-751971_87bb373c-794b-45f6-bbd2-ba79e22ee44f became leader
	I0913 18:22:03.541871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-751971_87bb373c-794b-45f6-bbd2-ba79e22ee44f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-751971 -n addons-751971
helpers_test.go:261: (dbg) Run:  kubectl --context addons-751971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-hfss6 ingress-nginx-admission-patch-ptf9d helper-pod-create-pvc-f65c88c7-360e-4112-b64c-a202b9b629b8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-751971 describe pod busybox test-local-path ingress-nginx-admission-create-hfss6 ingress-nginx-admission-patch-ptf9d helper-pod-create-pvc-f65c88c7-360e-4112-b64c-a202b9b629b8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-751971 describe pod busybox test-local-path ingress-nginx-admission-create-hfss6 ingress-nginx-admission-patch-ptf9d helper-pod-create-pvc-f65c88c7-360e-4112-b64c-a202b9b629b8: exit status 1 (123.297948ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-751971/192.168.49.2
	Start Time:       Fri, 13 Sep 2024 18:25:25 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8pt77 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8pt77:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-751971
	  Warning  Failed     7m58s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m46s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m17s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-st2zv (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-st2zv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hfss6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ptf9d" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-f65c88c7-360e-4112-b64c-a202b9b629b8" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-751971 describe pod busybox test-local-path ingress-nginx-admission-create-hfss6 ingress-nginx-admission-patch-ptf9d helper-pod-create-pvc-f65c88c7-360e-4112-b64c-a202b9b629b8: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.21s)


Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.64
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 8.21
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 84.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 222.23
29 TestAddons/serial/Volcano 41.69
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Ingress 18.52
35 TestAddons/parallel/InspektorGadget 10.75
36 TestAddons/parallel/MetricsServer 5.72
38 TestAddons/parallel/CSI 52.42
39 TestAddons/parallel/Headlamp 17.65
40 TestAddons/parallel/CloudSpanner 5.72
41 TestAddons/parallel/LocalPath 52.69
42 TestAddons/parallel/NvidiaDevicePlugin 6.48
43 TestAddons/parallel/Yakd 11.8
44 TestAddons/StoppedEnableDisable 6.05
45 TestCertOptions 37.87
46 TestCertExpiration 254.7
47 TestDockerFlags 47.47
48 TestForceSystemdFlag 36.73
49 TestForceSystemdEnv 43.35
55 TestErrorSpam/setup 33.4
56 TestErrorSpam/start 0.74
57 TestErrorSpam/status 1.18
58 TestErrorSpam/pause 1.5
59 TestErrorSpam/unpause 1.4
60 TestErrorSpam/stop 2.08
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 76.98
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 35.66
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.12
72 TestFunctional/serial/CacheCmd/cache/add_local 0.99
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 45.58
81 TestFunctional/serial/ComponentHealth 0.09
82 TestFunctional/serial/LogsCmd 1.12
83 TestFunctional/serial/LogsFileCmd 1.17
84 TestFunctional/serial/InvalidService 4.84
86 TestFunctional/parallel/ConfigCmd 0.45
87 TestFunctional/parallel/DashboardCmd 13.13
88 TestFunctional/parallel/DryRun 0.5
89 TestFunctional/parallel/InternationalLanguage 0.26
90 TestFunctional/parallel/StatusCmd 1.25
94 TestFunctional/parallel/ServiceCmdConnect 10.63
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 27.48
98 TestFunctional/parallel/SSHCmd 0.71
99 TestFunctional/parallel/CpCmd 2.46
101 TestFunctional/parallel/FileSync 0.38
102 TestFunctional/parallel/CertSync 2.19
106 TestFunctional/parallel/NodeLabels 0.13
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
110 TestFunctional/parallel/License 0.35
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
123 TestFunctional/parallel/ServiceCmd/List 0.57
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
126 TestFunctional/parallel/ProfileCmd/profile_list 0.53
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
129 TestFunctional/parallel/ServiceCmd/Format 0.64
130 TestFunctional/parallel/MountCmd/any-port 9.8
131 TestFunctional/parallel/ServiceCmd/URL 0.5
132 TestFunctional/parallel/MountCmd/specific-port 2.1
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.55
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.1
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.2
141 TestFunctional/parallel/ImageCommands/Setup 1.05
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
143 TestFunctional/parallel/DockerEnv/bash 1.24
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 127.35
160 TestMultiControlPlane/serial/DeployApp 44.19
161 TestMultiControlPlane/serial/PingHostFromPods 1.75
162 TestMultiControlPlane/serial/AddWorkerNode 24.27
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
165 TestMultiControlPlane/serial/CopyFile 20.6
166 TestMultiControlPlane/serial/StopSecondaryNode 11.79
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 63.95
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 227.28
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.46
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
173 TestMultiControlPlane/serial/StopCluster 32.96
174 TestMultiControlPlane/serial/RestartCluster 93
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
176 TestMultiControlPlane/serial/AddSecondaryNode 45.27
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
180 TestImageBuild/serial/Setup 35.2
181 TestImageBuild/serial/NormalBuild 1.96
182 TestImageBuild/serial/BuildWithBuildArg 1.42
183 TestImageBuild/serial/BuildWithDockerIgnore 0.95
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
188 TestJSONOutput/start/Command 77.71
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.61
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.55
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.87
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.21
213 TestKicCustomNetwork/create_custom_network 33.33
214 TestKicCustomNetwork/use_default_bridge_network 35.07
215 TestKicExistingNetwork 36.28
216 TestKicCustomSubnet 37.15
217 TestKicStaticIP 32.58
218 TestMainNoArgs 0.06
219 TestMinikubeProfile 69.3
222 TestMountStart/serial/StartWithMountFirst 7.83
223 TestMountStart/serial/VerifyMountFirst 0.28
224 TestMountStart/serial/StartWithMountSecond 10.62
225 TestMountStart/serial/VerifyMountSecond 0.27
226 TestMountStart/serial/DeleteFirst 1.53
227 TestMountStart/serial/VerifyMountPostDelete 0.27
228 TestMountStart/serial/Stop 1.21
229 TestMountStart/serial/RestartStopped 9.33
230 TestMountStart/serial/VerifyMountPostStop 0.27
233 TestMultiNode/serial/FreshStart2Nodes 88.45
234 TestMultiNode/serial/DeployApp2Nodes 47.45
235 TestMultiNode/serial/PingHostFrom2Pods 1.1
236 TestMultiNode/serial/AddNode 18.67
237 TestMultiNode/serial/MultiNodeLabels 0.1
238 TestMultiNode/serial/ProfileList 0.4
239 TestMultiNode/serial/CopyFile 10.81
240 TestMultiNode/serial/StopNode 2.31
241 TestMultiNode/serial/StartAfterStop 11.14
242 TestMultiNode/serial/RestartKeepsNodes 108.53
243 TestMultiNode/serial/DeleteNode 5.64
244 TestMultiNode/serial/StopMultiNode 21.8
245 TestMultiNode/serial/RestartMultiNode 56.73
246 TestMultiNode/serial/ValidateNameConflict 35.41
251 TestPreload 105.29
253 TestScheduledStopUnix 105.34
254 TestSkaffold 119.79
256 TestInsufficientStorage 13.77
257 TestRunningBinaryUpgrade 135.21
259 TestKubernetesUpgrade 391.72
260 TestMissingContainerUpgrade 120.37
272 TestStoppedBinaryUpgrade/Setup 0.89
273 TestStoppedBinaryUpgrade/Upgrade 84.95
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
276 TestPause/serial/Start 47.08
277 TestPause/serial/SecondStartNoReconfiguration 30.74
278 TestPause/serial/Pause 0.61
279 TestPause/serial/VerifyStatus 0.36
280 TestPause/serial/Unpause 0.52
281 TestPause/serial/PauseAgain 1.09
282 TestPause/serial/DeletePaused 2.23
283 TestPause/serial/VerifyDeletedResources 0.35
292 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
293 TestNoKubernetes/serial/StartWithK8s 36.49
294 TestNoKubernetes/serial/StartWithStopK8s 18.92
295 TestNoKubernetes/serial/Start 8.86
296 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
297 TestNoKubernetes/serial/ProfileList 0.92
298 TestNoKubernetes/serial/Stop 1.2
299 TestNoKubernetes/serial/StartNoArgs 8.41
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
301 TestNetworkPlugins/group/auto/Start 55.19
302 TestNetworkPlugins/group/auto/KubeletFlags 0.41
303 TestNetworkPlugins/group/auto/NetCatPod 13.4
304 TestNetworkPlugins/group/flannel/Start 61.6
305 TestNetworkPlugins/group/auto/DNS 0.27
306 TestNetworkPlugins/group/auto/Localhost 0.28
307 TestNetworkPlugins/group/auto/HairPin 0.23
308 TestNetworkPlugins/group/calico/Start 82.39
309 TestNetworkPlugins/group/flannel/ControllerPod 6.01
310 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
311 TestNetworkPlugins/group/flannel/NetCatPod 12.43
312 TestNetworkPlugins/group/flannel/DNS 0.37
313 TestNetworkPlugins/group/flannel/Localhost 0.3
314 TestNetworkPlugins/group/flannel/HairPin 0.4
315 TestNetworkPlugins/group/custom-flannel/Start 55.2
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.36
318 TestNetworkPlugins/group/calico/NetCatPod 12.33
319 TestNetworkPlugins/group/calico/DNS 0.26
320 TestNetworkPlugins/group/calico/Localhost 0.23
321 TestNetworkPlugins/group/calico/HairPin 0.24
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.5
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.33
324 TestNetworkPlugins/group/false/Start 56.08
325 TestNetworkPlugins/group/custom-flannel/DNS 0.29
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
328 TestNetworkPlugins/group/kindnet/Start 72.69
329 TestNetworkPlugins/group/false/KubeletFlags 0.4
330 TestNetworkPlugins/group/false/NetCatPod 12.36
331 TestNetworkPlugins/group/false/DNS 0.32
332 TestNetworkPlugins/group/false/Localhost 0.31
333 TestNetworkPlugins/group/false/HairPin 0.26
334 TestNetworkPlugins/group/kubenet/Start 47.61
335 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
336 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
337 TestNetworkPlugins/group/kindnet/NetCatPod 11.39
338 TestNetworkPlugins/group/kindnet/DNS 0.28
339 TestNetworkPlugins/group/kindnet/Localhost 0.16
340 TestNetworkPlugins/group/kindnet/HairPin 0.34
341 TestNetworkPlugins/group/kubenet/KubeletFlags 0.46
342 TestNetworkPlugins/group/kubenet/NetCatPod 11.43
343 TestNetworkPlugins/group/enable-default-cni/Start 81.9
344 TestNetworkPlugins/group/kubenet/DNS 0.3
345 TestNetworkPlugins/group/kubenet/Localhost 0.23
346 TestNetworkPlugins/group/kubenet/HairPin 0.29
347 TestNetworkPlugins/group/bridge/Start 53.72
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
349 TestNetworkPlugins/group/bridge/NetCatPod 12.3
350 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.63
351 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.36
352 TestNetworkPlugins/group/bridge/DNS 0.23
353 TestNetworkPlugins/group/bridge/Localhost 0.17
354 TestNetworkPlugins/group/bridge/HairPin 0.18
355 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
356 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
357 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
359 TestStartStop/group/old-k8s-version/serial/FirstStart 156.13
361 TestStartStop/group/no-preload/serial/FirstStart 85.48
362 TestStartStop/group/no-preload/serial/DeployApp 9.39
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
364 TestStartStop/group/no-preload/serial/Stop 10.92
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
366 TestStartStop/group/no-preload/serial/SecondStart 269.09
367 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
369 TestStartStop/group/old-k8s-version/serial/Stop 11.08
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/old-k8s-version/serial/SecondStart 141.67
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/old-k8s-version/serial/Pause 2.84
377 TestStartStop/group/embed-certs/serial/FirstStart 77.91
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
381 TestStartStop/group/no-preload/serial/Pause 2.97
383 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.72
384 TestStartStop/group/embed-certs/serial/DeployApp 10.47
385 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
386 TestStartStop/group/embed-certs/serial/Stop 10.88
387 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
388 TestStartStop/group/embed-certs/serial/SecondStart 266.84
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.96
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.42
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
397 TestStartStop/group/embed-certs/serial/Pause 3.05
399 TestStartStop/group/newest-cni/serial/FirstStart 43.21
400 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
401 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
402 TestStartStop/group/newest-cni/serial/DeployApp 0
403 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
404 TestStartStop/group/newest-cni/serial/Stop 11.09
405 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
406 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.01
407 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
408 TestStartStop/group/newest-cni/serial/SecondStart 17.88
409 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
412 TestStartStop/group/newest-cni/serial/Pause 3.04
TestDownloadOnly/v1.20.0/json-events (7.64s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-518803 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-518803 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.638536611s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.64s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-518803
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-518803: exit status 85 (68.197007ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-518803 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |          |
	|         | -p download-only-518803        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:43.105645    7571 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:43.105842    7571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:43.105873    7571 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:43.105894    7571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:43.106200    7571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	W0913 18:20:43.106384    7571 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19636-2205/.minikube/config/config.json: open /home/jenkins/minikube-integration/19636-2205/.minikube/config/config.json: no such file or directory
	I0913 18:20:43.106816    7571 out.go:352] Setting JSON to true
	I0913 18:20:43.107629    7571 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":190,"bootTime":1726251453,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 18:20:43.107733    7571 start.go:139] virtualization:  
	I0913 18:20:43.110753    7571 out.go:97] [download-only-518803] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0913 18:20:43.110924    7571 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-2205/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:20:43.110976    7571 notify.go:220] Checking for updates...
	I0913 18:20:43.113053    7571 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:20:43.115109    7571 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:43.116941    7571 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	I0913 18:20:43.118704    7571 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	I0913 18:20:43.120664    7571 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0913 18:20:43.124470    7571 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 18:20:43.124829    7571 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:20:43.144690    7571 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:20:43.144818    7571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:20:43.498674    7571 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:20:43.488787773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:20:43.498789    7571 docker.go:318] overlay module found
	I0913 18:20:43.502873    7571 out.go:97] Using the docker driver based on user configuration
	I0913 18:20:43.502908    7571 start.go:297] selected driver: docker
	I0913 18:20:43.502916    7571 start.go:901] validating driver "docker" against <nil>
	I0913 18:20:43.503036    7571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:20:43.556663    7571 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:20:43.546991772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:20:43.556865    7571 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:20:43.557166    7571 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0913 18:20:43.557340    7571 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 18:20:43.560016    7571 out.go:169] Using Docker driver with root privileges
	I0913 18:20:43.562115    7571 cni.go:84] Creating CNI manager for ""
	I0913 18:20:43.562187    7571 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 18:20:43.562275    7571 start.go:340] cluster config:
	{Name:download-only-518803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-518803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:20:43.565069    7571 out.go:97] Starting "download-only-518803" primary control-plane node in "download-only-518803" cluster
	I0913 18:20:43.565098    7571 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 18:20:43.568207    7571 out.go:97] Pulling base image v0.0.45-1726193793-19634 ...
	I0913 18:20:43.568251    7571 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 18:20:43.568417    7571 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local docker daemon
	I0913 18:20:43.584252    7571 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:20:43.584419    7571 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory
	I0913 18:20:43.584520    7571 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:20:43.640429    7571 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 18:20:43.640472    7571 cache.go:56] Caching tarball of preloaded images
	I0913 18:20:43.640654    7571 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 18:20:43.643316    7571 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 18:20:43.643345    7571 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 18:20:43.742037    7571 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19636-2205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-518803 host does not exist
	  To start a cluster, run: "minikube start -p download-only-518803"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-518803
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.1/json-events (8.21s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-650419 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-650419 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.211742655s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.21s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-650419
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-650419: exit status 85 (73.965434ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-518803 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-518803        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p download-only-518803        | download-only-518803 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | -o=json --download-only        | download-only-650419 | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p download-only-650419        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:51
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:51.155104    7767 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:51.155241    7767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:51.155252    7767 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:51.155257    7767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:51.155493    7767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 18:20:51.155940    7767 out.go:352] Setting JSON to true
	I0913 18:20:51.156671    7767 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":199,"bootTime":1726251453,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 18:20:51.156749    7767 start.go:139] virtualization:  
	I0913 18:20:51.159568    7767 out.go:97] [download-only-650419] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 18:20:51.159886    7767 notify.go:220] Checking for updates...
	I0913 18:20:51.163176    7767 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:20:51.167399    7767 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:51.170148    7767 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	I0913 18:20:51.172424    7767 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	I0913 18:20:51.174349    7767 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0913 18:20:51.179697    7767 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 18:20:51.180068    7767 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:20:51.204710    7767 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:20:51.204825    7767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:20:51.264494    7767 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-13 18:20:51.25467314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:20:51.264621    7767 docker.go:318] overlay module found
	I0913 18:20:51.266742    7767 out.go:97] Using the docker driver based on user configuration
	I0913 18:20:51.266794    7767 start.go:297] selected driver: docker
	I0913 18:20:51.266809    7767 start.go:901] validating driver "docker" against <nil>
	I0913 18:20:51.266935    7767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:20:51.322551    7767 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-13 18:20:51.313437287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:20:51.322767    7767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:20:51.323062    7767 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0913 18:20:51.323220    7767 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 18:20:51.326097    7767 out.go:169] Using Docker driver with root privileges
	I0913 18:20:51.328764    7767 cni.go:84] Creating CNI manager for ""
	I0913 18:20:51.328833    7767 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:20:51.328842    7767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:20:51.328935    7767 start.go:340] cluster config:
	{Name:download-only-650419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-650419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:20:51.331305    7767 out.go:97] Starting "download-only-650419" primary control-plane node in "download-only-650419" cluster
	I0913 18:20:51.331330    7767 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 18:20:51.333814    7767 out.go:97] Pulling base image v0.0.45-1726193793-19634 ...
	I0913 18:20:51.333837    7767 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:20:51.333997    7767 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local docker daemon
	I0913 18:20:51.350338    7767 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e to local cache
	I0913 18:20:51.350479    7767 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory
	I0913 18:20:51.350502    7767 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e in local cache directory, skipping pull
	I0913 18:20:51.350515    7767 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e exists in cache, skipping pull
	I0913 18:20:51.350524    7767 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e as a tarball
	I0913 18:20:51.402200    7767 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 18:20:51.402229    7767 cache.go:56] Caching tarball of preloaded images
	I0913 18:20:51.402406    7767 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:20:51.405016    7767 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 18:20:51.405045    7767 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0913 18:20:51.490682    7767 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19636-2205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-650419 host does not exist
	  To start a cluster, run: "minikube start -p download-only-650419"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-650419
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-528081 --alsologtostderr --binary-mirror http://127.0.0.1:46241 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-528081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-528081
--- PASS: TestBinaryMirror (0.59s)

TestOffline (84.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-180568 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-180568 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m21.963426354s)
helpers_test.go:175: Cleaning up "offline-docker-180568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-180568
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-180568: (2.46022402s)
--- PASS: TestOffline (84.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-751971
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-751971: exit status 85 (65.336086ms)

-- stdout --
	* Profile "addons-751971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-751971"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-751971
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-751971: exit status 85 (81.037433ms)

-- stdout --
	* Profile "addons-751971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-751971"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (222.23s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-751971 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-751971 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m42.23253778s)
--- PASS: TestAddons/Setup (222.23s)

TestAddons/serial/Volcano (41.69s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 24.22694ms
addons_test.go:843: volcano-admission stabilized in 25.007154ms
addons_test.go:851: volcano-controller stabilized in 25.341376ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-nswlr" [74a0f808-1b79-43e5-99de-17369d287b8a] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003872545s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-q7ttd" [c5cfb335-b786-4800-a445-9fa58ff2f5d8] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.008722649s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-n6mz4" [04fd71cb-fe25-4b17-b41b-e50dd8e5de76] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003321722s
addons_test.go:870: (dbg) Run:  kubectl --context addons-751971 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-751971 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-751971 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2e2b4060-36e0-4411-9e74-68a7e5d1563f] Pending
helpers_test.go:344: "test-job-nginx-0" [2e2b4060-36e0-4411-9e74-68a7e5d1563f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2e2b4060-36e0-4411-9e74-68a7e5d1563f] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003565512s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 addons disable volcano --alsologtostderr -v=1: (10.705064085s)
--- PASS: TestAddons/serial/Volcano (41.69s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-751971 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-751971 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Ingress (18.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-751971 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-751971 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-751971 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ecfde441-fffb-4a21-9b83-a196e33e6bf2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ecfde441-fffb-4a21-9b83-a196e33e6bf2] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004452049s
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-751971 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 addons disable ingress-dns --alsologtostderr -v=1: (1.096041217s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 addons disable ingress --alsologtostderr -v=1: (7.730013291s)
--- PASS: TestAddons/parallel/Ingress (18.52s)

TestAddons/parallel/InspektorGadget (10.75s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rrzsl" [f86babd5-6c92-4567-8829-fe55d3a566cd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005189176s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-751971
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-751971: (5.74867372s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

TestAddons/parallel/MetricsServer (5.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.479315ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-b2vjl" [dd32f9dc-f42f-4f79-b76e-2b8a1e76dbee] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005021815s
addons_test.go:413: (dbg) Run:  kubectl --context addons-751971 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

TestAddons/parallel/CSI (52.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 6.941433ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-751971 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-751971 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9396e9f0-4ec0-4987-baa6-4a860c0c9ae6] Pending
helpers_test.go:344: "task-pv-pod" [9396e9f0-4ec0-4987-baa6-4a860c0c9ae6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9396e9f0-4ec0-4987-baa6-4a860c0c9ae6] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00303714s
addons_test.go:528: (dbg) Run:  kubectl --context addons-751971 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-751971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-751971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-751971 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-751971 delete pod task-pv-pod: (1.365846914s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-751971 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-751971 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-751971 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0cd5e174-6cbf-4562-bbee-caa0b8d5ef0a] Pending
helpers_test.go:344: "task-pv-pod-restore" [0cd5e174-6cbf-4562-bbee-caa0b8d5ef0a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0cd5e174-6cbf-4562-bbee-caa0b8d5ef0a] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004105485s
addons_test.go:570: (dbg) Run:  kubectl --context addons-751971 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-751971 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-751971 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.70725097s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.42s)

TestAddons/parallel/Headlamp (17.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-751971 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-4gzkd" [16315393-09e6-47d6-b202-44f3f11c4b8d] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-4gzkd" [16315393-09e6-47d6-b202-44f3f11c4b8d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-4gzkd" [16315393-09e6-47d6-b202-44f3f11c4b8d] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003832042s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 addons disable headlamp --alsologtostderr -v=1: (5.691921276s)
--- PASS: TestAddons/parallel/Headlamp (17.65s)

TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-w9b87" [e64b948d-5208-4b19-8f08-ba2b9f4b1bd1] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004924025s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-751971
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

TestAddons/parallel/LocalPath (52.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-751971 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-751971 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-751971 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fe2ea4c2-ba81-4175-82bc-02d21e0189e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fe2ea4c2-ba81-4175-82bc-02d21e0189e9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fe2ea4c2-ba81-4175-82bc-02d21e0189e9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004411538s
addons_test.go:938: (dbg) Run:  kubectl --context addons-751971 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 ssh "cat /opt/local-path-provisioner/pvc-f65c88c7-360e-4112-b64c-a202b9b629b8_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-751971 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-751971 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.393922653s)
--- PASS: TestAddons/parallel/LocalPath (52.69s)

TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8dklz" [4383ec80-7943-4b54-a0ff-b49159c7adc4] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003655947s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-751971
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

TestAddons/parallel/Yakd (11.8s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bsr4b" [6f4a106f-3576-46a9-a341-097d390cdff4] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003867662s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-751971 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-751971 addons disable yakd --alsologtostderr -v=1: (5.794288614s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

TestAddons/StoppedEnableDisable (6.05s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-751971
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-751971: (5.791248851s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-751971
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-751971
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-751971
--- PASS: TestAddons/StoppedEnableDisable (6.05s)

TestCertOptions (37.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-000733 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0913 19:14:27.366022    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-000733 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.015007803s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-000733 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-000733 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-000733 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-000733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-000733
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-000733: (2.16005654s)
--- PASS: TestCertOptions (37.87s)

TestCertExpiration (254.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-779287 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-779287 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.080459263s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-779287 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-779287 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (32.003248144s)
helpers_test.go:175: Cleaning up "cert-expiration-779287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-779287
E0913 19:17:30.430950    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-779287: (2.618915878s)
--- PASS: TestCertExpiration (254.70s)

TestDockerFlags (47.47s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-082930 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-082930 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.499160856s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-082930 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-082930 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-082930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-082930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-082930: (2.06372545s)
--- PASS: TestDockerFlags (47.47s)

TestForceSystemdFlag (36.73s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-394160 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0913 19:12:46.838392    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-394160 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.857608078s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-394160 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-394160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-394160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-394160: (2.407646923s)
--- PASS: TestForceSystemdFlag (36.73s)

TestForceSystemdEnv (43.35s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-600252 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-600252 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.557936321s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-600252 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-600252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-600252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-600252: (2.238282494s)
--- PASS: TestForceSystemdEnv (43.35s)

TestErrorSpam/setup (33.4s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-545915 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-545915 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-545915 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-545915 --driver=docker  --container-runtime=docker: (33.395074103s)
--- PASS: TestErrorSpam/setup (33.40s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (2.08s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 stop: (1.878005271s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-545915 --log_dir /tmp/nospam-545915 stop
--- PASS: TestErrorSpam/stop (2.08s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19636-2205/.minikube/files/etc/test/nested/copy/7564/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-109833 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-109833 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m16.983770384s)
--- PASS: TestFunctional/serial/StartWithProxy (76.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.66s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-109833 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-109833 --alsologtostderr -v=8: (35.663088424s)
functional_test.go:663: soft start took 35.663585947s for "functional-109833" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.66s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-109833 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-109833 cache add registry.k8s.io/pause:3.1: (1.100800291s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-109833 cache add registry.k8s.io/pause:3.3: (1.097881014s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-109833 /tmp/TestFunctionalserialCacheCmdcacheadd_local3130998532/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cache add minikube-local-cache-test:functional-109833
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cache delete minikube-local-cache-test:functional-109833
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-109833
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.773626ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 kubectl -- --context functional-109833 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-109833 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.58s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-109833 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-109833 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.576009739s)
functional_test.go:761: restart took 45.576119811s for "functional-109833" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.58s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-109833 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.12s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-109833 logs: (1.118963634s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.17s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 logs --file /tmp/TestFunctionalserialLogsFileCmd485778439/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-109833 logs --file /tmp/TestFunctionalserialLogsFileCmd485778439/001/logs.txt: (1.172343214s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)

                                                
                                    
TestFunctional/serial/InvalidService (4.84s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-109833 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-109833
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-109833: exit status 115 (375.403814ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30809 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-109833 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-109833 delete -f testdata/invalidsvc.yaml: (1.184611299s)
--- PASS: TestFunctional/serial/InvalidService (4.84s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 config get cpus: exit status 14 (64.739046ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 config get cpus: exit status 14 (68.448312ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.13s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-109833 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-109833 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49309: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.13s)

                                                
                                    
TestFunctional/parallel/DryRun (0.5s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-109833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-109833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (239.481974ms)

-- stdout --
	* [functional-109833] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0913 18:39:59.557417   48969 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:39:59.562396   48969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:39:59.562449   48969 out.go:358] Setting ErrFile to fd 2...
	I0913 18:39:59.562471   48969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:39:59.562930   48969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 18:39:59.563509   48969 out.go:352] Setting JSON to false
	I0913 18:39:59.564660   48969 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1347,"bootTime":1726251453,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 18:39:59.564768   48969 start.go:139] virtualization:  
	I0913 18:39:59.567274   48969 out.go:177] * [functional-109833] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 18:39:59.574483   48969 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:39:59.574553   48969 notify.go:220] Checking for updates...
	I0913 18:39:59.578425   48969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:39:59.580464   48969 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	I0913 18:39:59.582296   48969 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	I0913 18:39:59.584267   48969 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 18:39:59.586021   48969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:39:59.588423   48969 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:39:59.589032   48969 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:39:59.615225   48969 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:39:59.615347   48969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:39:59.693915   48969 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:39:59.683420293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:39:59.694033   48969 docker.go:318] overlay module found
	I0913 18:39:59.697449   48969 out.go:177] * Using the docker driver based on existing profile
	I0913 18:39:59.699192   48969 start.go:297] selected driver: docker
	I0913 18:39:59.699222   48969 start.go:901] validating driver "docker" against &{Name:functional-109833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-109833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:39:59.699348   48969 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:39:59.701563   48969 out.go:201] 
	W0913 18:39:59.703696   48969 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 18:39:59.705852   48969 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-109833 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.50s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-109833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-109833 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (262.56146ms)

-- stdout --
	* [functional-109833] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0913 18:39:59.284672   48913 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:39:59.284812   48913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:39:59.284825   48913 out.go:358] Setting ErrFile to fd 2...
	I0913 18:39:59.284832   48913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:39:59.285220   48913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 18:39:59.285651   48913 out.go:352] Setting JSON to false
	I0913 18:39:59.286630   48913 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1347,"bootTime":1726251453,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 18:39:59.286707   48913 start.go:139] virtualization:  
	I0913 18:39:59.289870   48913 out.go:177] * [functional-109833] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0913 18:39:59.292219   48913 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:39:59.292347   48913 notify.go:220] Checking for updates...
	I0913 18:39:59.296497   48913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:39:59.298764   48913 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	I0913 18:39:59.302248   48913 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	I0913 18:39:59.304904   48913 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 18:39:59.307797   48913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:39:59.310541   48913 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:39:59.311115   48913 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:39:59.367788   48913 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 18:39:59.367904   48913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:39:59.453590   48913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 18:39:59.442127681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:39:59.453708   48913 docker.go:318] overlay module found
	I0913 18:39:59.457458   48913 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0913 18:39:59.459747   48913 start.go:297] selected driver: docker
	I0913 18:39:59.459771   48913 start.go:901] validating driver "docker" against &{Name:functional-109833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-109833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:39:59.459891   48913 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:39:59.462729   48913 out.go:201] 
	W0913 18:39:59.464975   48913 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 18:39:59.466607   48913 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.25s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.63s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-109833 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-109833 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-4nxxp" [ef44231e-1bc2-4907-b14c-0050c313ed98] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-4nxxp" [ef44231e-1bc2-4907-b14c-0050c313ed98] Running
E0913 18:39:43.772803    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:43.779674    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:43.791209    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:43.812738    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:43.854271    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:43.935816    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:44.097403    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:44.419142    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:45.061325    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:39:46.342762    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003890503s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31546
functional_test.go:1675: http://192.168.49.2:31546: success! body:

Hostname: hello-node-connect-65d86f57f4-4nxxp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31546
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.63s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (27.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e91b152b-f2be-423f-8b5e-d1dfd81abfdc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004390666s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-109833 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-109833 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-109833 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-109833 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7ab56b75-5f6a-4a0b-bcc8-435f7dc47af6] Pending
helpers_test.go:344: "sp-pod" [7ab56b75-5f6a-4a0b-bcc8-435f7dc47af6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7ab56b75-5f6a-4a0b-bcc8-435f7dc47af6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003529012s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-109833 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-109833 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-109833 delete -f testdata/storage-provisioner/pod.yaml: (1.378083591s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-109833 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dbaa1b3e-3b94-4ef8-8f3b-62a5c7e5d161] Pending
E0913 18:39:48.904217    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [dbaa1b3e-3b94-4ef8-8f3b-62a5c7e5d161] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dbaa1b3e-3b94-4ef8-8f3b-62a5c7e5d161] Running
E0913 18:39:54.026437    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007048003s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-109833 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.48s)

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh -n functional-109833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cp functional-109833:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1173313374/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh -n functional-109833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh -n functional-109833 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.46s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7564/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo cat /etc/test/nested/copy/7564/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7564.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo cat /etc/ssl/certs/7564.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7564.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo cat /usr/share/ca-certificates/7564.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75642.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo cat /etc/ssl/certs/75642.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75642.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo cat /usr/share/ca-certificates/75642.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-109833 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 ssh "sudo systemctl is-active crio": exit status 1 (352.234783ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-109833 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-109833 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-109833 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-109833 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46145: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-109833 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-109833 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7f6e1dc6-d140-41ef-899a-a409fe33c551] Pending
helpers_test.go:344: "nginx-svc" [7f6e1dc6-d140-41ef-899a-a409fe33c551] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7f6e1dc6-d140-41ef-899a-a409fe33c551] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.01647579s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-109833 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.125.191 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-109833 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-109833 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-109833 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-v7mb8" [b4a778df-296b-4b48-aa29-96dead841c11] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-v7mb8" [b4a778df-296b-4b48-aa29-96dead841c11] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004073055s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 service list -o json
functional_test.go:1494: Took "673.549823ms" to run "out/minikube-linux-arm64 -p functional-109833 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "455.672894ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "71.669268ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31257
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "439.093999ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "68.384542ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/ServiceCmd/Format (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.64s)

TestFunctional/parallel/MountCmd/any-port (9.8s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdany-port1793874756/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726252797105504010" to /tmp/TestFunctionalparallelMountCmdany-port1793874756/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726252797105504010" to /tmp/TestFunctionalparallelMountCmdany-port1793874756/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726252797105504010" to /tmp/TestFunctionalparallelMountCmdany-port1793874756/001/test-1726252797105504010
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (483.534966ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 18:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 18:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 18:39 test-1726252797105504010
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh cat /mount-9p/test-1726252797105504010
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-109833 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e3636886-0e1d-414a-854c-77ebc130bc23] Pending
helpers_test.go:344: "busybox-mount" [e3636886-0e1d-414a-854c-77ebc130bc23] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0913 18:40:04.268633    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [e3636886-0e1d-414a-854c-77ebc130bc23] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e3636886-0e1d-414a-854c-77ebc130bc23] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.010649079s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-109833 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdany-port1793874756/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.80s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31257
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

TestFunctional/parallel/MountCmd/specific-port (2.1s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdspecific-port2977409510/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (481.886401ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdspecific-port2977409510/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 ssh "sudo umount -f /mount-9p": exit status 1 (340.862639ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-109833 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdspecific-port2977409510/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1465547137/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1465547137/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1465547137/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T" /mount1: exit status 1 (958.843517ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-109833 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1465547137/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1465547137/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-109833 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1465547137/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.55s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-109833 version -o=json --components: (1.099709958s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-109833 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-109833
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-109833
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-109833 image ls --format short --alsologtostderr:
I0913 18:40:19.016078   52217 out.go:345] Setting OutFile to fd 1 ...
I0913 18:40:19.016301   52217 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.016313   52217 out.go:358] Setting ErrFile to fd 2...
I0913 18:40:19.016319   52217 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.016635   52217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
I0913 18:40:19.017360   52217 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.017527   52217 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.018108   52217 cli_runner.go:164] Run: docker container inspect functional-109833 --format={{.State.Status}}
I0913 18:40:19.039099   52217 ssh_runner.go:195] Run: systemctl --version
I0913 18:40:19.039149   52217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-109833
I0913 18:40:19.072611   52217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/functional-109833/id_rsa Username:docker}
I0913 18:40:19.173117   52217 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-109833 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-109833 | f9820686bdb88 | 30B    |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-109833 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-109833 image ls --format table --alsologtostderr:
I0913 18:40:19.317691   52286 out.go:345] Setting OutFile to fd 1 ...
I0913 18:40:19.317873   52286 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.317887   52286 out.go:358] Setting ErrFile to fd 2...
I0913 18:40:19.317894   52286 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.318252   52286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
I0913 18:40:19.318957   52286 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.319100   52286 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.319570   52286 cli_runner.go:164] Run: docker container inspect functional-109833 --format={{.State.Status}}
I0913 18:40:19.342678   52286 ssh_runner.go:195] Run: systemctl --version
I0913 18:40:19.342746   52286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-109833
I0913 18:40:19.370550   52286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/functional-109833/id_rsa Username:docker}
I0913 18:40:19.472281   52286 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-109833 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-109833"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"f9820686bdb88f39a7d64b5f17e3dfb2d5ca1ec28f4cd112e8edb7d8b69ab8ea","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-109833"],"size":"30"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-109833 image ls --format json --alsologtostderr:
I0913 18:40:19.283307   52282 out.go:345] Setting OutFile to fd 1 ...
I0913 18:40:19.283415   52282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.283425   52282 out.go:358] Setting ErrFile to fd 2...
I0913 18:40:19.283430   52282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.283727   52282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
I0913 18:40:19.284384   52282 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.284513   52282 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.284994   52282 cli_runner.go:164] Run: docker container inspect functional-109833 --format={{.State.Status}}
I0913 18:40:19.313769   52282 ssh_runner.go:195] Run: systemctl --version
I0913 18:40:19.313825   52282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-109833
I0913 18:40:19.332063   52282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/functional-109833/id_rsa Username:docker}
I0913 18:40:19.430595   52282 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
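Not part of the test log: the `image ls --format json` stdout above is a flat JSON array of records with `id`, `repoDigests`, `repoTags`, and `size` (a decimal byte-count string). A minimal post-processing sketch, using two records copied verbatim from the output above:

```python
import json

# Two records copied from the `image ls --format json` stdout above;
# the real output is one JSON array containing every listed image.
sample = (
    '[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5",'
    '"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},'
    '{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da",'
    '"repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"}]'
)

images = json.loads(sample)
# `size` is a string of bytes, so convert before summing.
total = sum(int(img["size"]) for img in images)
for img in images:
    print(img["repoTags"][0], img["size"])
print("total bytes:", total)
```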

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-109833 image ls --format yaml --alsologtostderr:
- id: f9820686bdb88f39a7d64b5f17e3dfb2d5ca1ec28f4cd112e8edb7d8b69ab8ea
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-109833
size: "30"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-109833
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-109833 image ls --format yaml --alsologtostderr:
I0913 18:40:19.028651   52218 out.go:345] Setting OutFile to fd 1 ...
I0913 18:40:19.028855   52218 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.028884   52218 out.go:358] Setting ErrFile to fd 2...
I0913 18:40:19.028910   52218 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.029223   52218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
I0913 18:40:19.029927   52218 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.030142   52218 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.030762   52218 cli_runner.go:164] Run: docker container inspect functional-109833 --format={{.State.Status}}
I0913 18:40:19.056966   52218 ssh_runner.go:195] Run: systemctl --version
I0913 18:40:19.057017   52218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-109833
I0913 18:40:19.077370   52218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/functional-109833/id_rsa Username:docker}
I0913 18:40:19.178626   52218 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-109833 ssh pgrep buildkitd: exit status 1 (288.823484ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image build -t localhost/my-image:functional-109833 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-109833 image build -t localhost/my-image:functional-109833 testdata/build --alsologtostderr: (2.68407216s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-109833 image build -t localhost/my-image:functional-109833 testdata/build --alsologtostderr:
I0913 18:40:19.795908   52404 out.go:345] Setting OutFile to fd 1 ...
I0913 18:40:19.796094   52404 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.796105   52404 out.go:358] Setting ErrFile to fd 2...
I0913 18:40:19.796110   52404 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:40:19.796366   52404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
I0913 18:40:19.797015   52404 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.797683   52404 config.go:182] Loaded profile config "functional-109833": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:40:19.798245   52404 cli_runner.go:164] Run: docker container inspect functional-109833 --format={{.State.Status}}
I0913 18:40:19.816844   52404 ssh_runner.go:195] Run: systemctl --version
I0913 18:40:19.816901   52404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-109833
I0913 18:40:19.836214   52404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/functional-109833/id_rsa Username:docker}
I0913 18:40:19.934675   52404 build_images.go:161] Building image from path: /tmp/build.39945375.tar
I0913 18:40:19.934743   52404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 18:40:19.944091   52404 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.39945375.tar
I0913 18:40:19.948007   52404 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.39945375.tar: stat -c "%s %y" /var/lib/minikube/build/build.39945375.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.39945375.tar': No such file or directory
I0913 18:40:19.948048   52404 ssh_runner.go:362] scp /tmp/build.39945375.tar --> /var/lib/minikube/build/build.39945375.tar (3072 bytes)
I0913 18:40:19.976203   52404 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.39945375
I0913 18:40:19.986033   52404 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.39945375 -xf /var/lib/minikube/build/build.39945375.tar
I0913 18:40:19.995802   52404 docker.go:360] Building image: /var/lib/minikube/build/build.39945375
I0913 18:40:19.995870   52404 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-109833 /var/lib/minikube/build/build.39945375
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b091f9c79ad138d2e377a279d72c0fd0714310e3bd00f0f3e155221654678a92 done
#8 naming to localhost/my-image:functional-109833 done
#8 DONE 0.1s
I0913 18:40:22.405586   52404 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-109833 /var/lib/minikube/build/build.39945375: (2.409690037s)
I0913 18:40:22.405661   52404 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.39945375
I0913 18:40:22.415097   52404 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.39945375.tar
I0913 18:40:22.423756   52404 build_images.go:217] Built localhost/my-image:functional-109833 from /tmp/build.39945375.tar
I0913 18:40:22.423786   52404 build_images.go:133] succeeded building to: functional-109833
I0913 18:40:22.423792   52404 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.20s)

TestFunctional/parallel/ImageCommands/Setup (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/13 18:40:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.012070879s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-109833
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image load --daemon kicbase/echo-server:functional-109833 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

TestFunctional/parallel/DockerEnv/bash (1.24s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-109833 docker-env) && out/minikube-linux-arm64 status -p functional-109833"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-109833 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.24s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image load --daemon kicbase/echo-server:functional-109833 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-109833
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image load --daemon kicbase/echo-server:functional-109833 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image save kicbase/echo-server:functional-109833 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image rm kicbase/echo-server:functional-109833 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-109833
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-109833 image save --daemon kicbase/echo-server:functional-109833 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-109833
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-109833
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-109833
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-109833
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (127.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-867275 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0913 18:41:05.711523    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:42:27.633687    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-867275 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m6.440424392s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (127.35s)

TestMultiControlPlane/serial/DeployApp (44.19s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-867275 -- rollout status deployment/busybox: (5.440288244s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-58dcx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-5lqdj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-ccwnr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-58dcx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-5lqdj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-ccwnr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-58dcx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-5lqdj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-ccwnr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (44.19s)

TestMultiControlPlane/serial/PingHostFromPods (1.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-58dcx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-58dcx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-5lqdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-5lqdj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-ccwnr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-867275 -- exec busybox-7dff88458-ccwnr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)

TestMultiControlPlane/serial/AddWorkerNode (24.27s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-867275 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-867275 -v=7 --alsologtostderr: (23.089291953s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr: (1.17667385s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.27s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-867275 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (20.6s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 status --output json -v=7 --alsologtostderr: (1.069712248s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp testdata/cp-test.txt ha-867275:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3633892203/001/cp-test_ha-867275.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275:/home/docker/cp-test.txt ha-867275-m02:/home/docker/cp-test_ha-867275_ha-867275-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test_ha-867275_ha-867275-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275:/home/docker/cp-test.txt ha-867275-m03:/home/docker/cp-test_ha-867275_ha-867275-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test_ha-867275_ha-867275-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275:/home/docker/cp-test.txt ha-867275-m04:/home/docker/cp-test_ha-867275_ha-867275-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test_ha-867275_ha-867275-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp testdata/cp-test.txt ha-867275-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3633892203/001/cp-test_ha-867275-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m02:/home/docker/cp-test.txt ha-867275:/home/docker/cp-test_ha-867275-m02_ha-867275.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test_ha-867275-m02_ha-867275.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m02:/home/docker/cp-test.txt ha-867275-m03:/home/docker/cp-test_ha-867275-m02_ha-867275-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test_ha-867275-m02_ha-867275-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m02:/home/docker/cp-test.txt ha-867275-m04:/home/docker/cp-test_ha-867275-m02_ha-867275-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test_ha-867275-m02_ha-867275-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp testdata/cp-test.txt ha-867275-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3633892203/001/cp-test_ha-867275-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m03:/home/docker/cp-test.txt ha-867275:/home/docker/cp-test_ha-867275-m03_ha-867275.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test_ha-867275-m03_ha-867275.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m03:/home/docker/cp-test.txt ha-867275-m02:/home/docker/cp-test_ha-867275-m03_ha-867275-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test_ha-867275-m03_ha-867275-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m03:/home/docker/cp-test.txt ha-867275-m04:/home/docker/cp-test_ha-867275-m03_ha-867275-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test_ha-867275-m03_ha-867275-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp testdata/cp-test.txt ha-867275-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3633892203/001/cp-test_ha-867275-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m04:/home/docker/cp-test.txt ha-867275:/home/docker/cp-test_ha-867275-m04_ha-867275.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275 "sudo cat /home/docker/cp-test_ha-867275-m04_ha-867275.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m04:/home/docker/cp-test.txt ha-867275-m02:/home/docker/cp-test_ha-867275-m04_ha-867275-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m02 "sudo cat /home/docker/cp-test_ha-867275-m04_ha-867275-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 cp ha-867275-m04:/home/docker/cp-test.txt ha-867275-m03:/home/docker/cp-test_ha-867275-m04_ha-867275-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 ssh -n ha-867275-m03 "sudo cat /home/docker/cp-test_ha-867275-m04_ha-867275-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.60s)

TestMultiControlPlane/serial/StopSecondaryNode (11.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 node stop m02 -v=7 --alsologtostderr: (11.013484598s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr: exit status 7 (780.050972ms)

-- stdout --
	ha-867275
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-867275-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-867275-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-867275-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0913 18:44:15.378786   74987 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:44:15.378927   74987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:44:15.378939   74987 out.go:358] Setting ErrFile to fd 2...
	I0913 18:44:15.378945   74987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:44:15.379175   74987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 18:44:15.379351   74987 out.go:352] Setting JSON to false
	I0913 18:44:15.379381   74987 mustload.go:65] Loading cluster: ha-867275
	I0913 18:44:15.379482   74987 notify.go:220] Checking for updates...
	I0913 18:44:15.379860   74987 config.go:182] Loaded profile config "ha-867275": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:44:15.379877   74987 status.go:255] checking status of ha-867275 ...
	I0913 18:44:15.380379   74987 cli_runner.go:164] Run: docker container inspect ha-867275 --format={{.State.Status}}
	I0913 18:44:15.396754   74987 status.go:330] ha-867275 host status = "Running" (err=<nil>)
	I0913 18:44:15.396775   74987 host.go:66] Checking if "ha-867275" exists ...
	I0913 18:44:15.397626   74987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-867275
	I0913 18:44:15.414400   74987 host.go:66] Checking if "ha-867275" exists ...
	I0913 18:44:15.414701   74987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:44:15.414743   74987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-867275
	I0913 18:44:15.445938   74987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/ha-867275/id_rsa Username:docker}
	I0913 18:44:15.547411   74987 ssh_runner.go:195] Run: systemctl --version
	I0913 18:44:15.552281   74987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:15.571080   74987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 18:44:15.648471   74987 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-13 18:44:15.635819793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 18:44:15.649052   74987 kubeconfig.go:125] found "ha-867275" server: "https://192.168.49.254:8443"
	I0913 18:44:15.649082   74987 api_server.go:166] Checking apiserver status ...
	I0913 18:44:15.649125   74987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:44:15.661940   74987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2254/cgroup
	I0913 18:44:15.672596   74987 api_server.go:182] apiserver freezer: "5:freezer:/docker/cc8de98fc0461e18af65f5f40cc8739a3c28ed9d1ab4a46f684979bddeeb95a0/kubepods/burstable/pod50613ed4e570b059499cd117ec2baa4a/23ca5ed0299d556ab5dedbcda8942f1a8dc73ee9f335cb0812c737efbe16aadf"
	I0913 18:44:15.672683   74987 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cc8de98fc0461e18af65f5f40cc8739a3c28ed9d1ab4a46f684979bddeeb95a0/kubepods/burstable/pod50613ed4e570b059499cd117ec2baa4a/23ca5ed0299d556ab5dedbcda8942f1a8dc73ee9f335cb0812c737efbe16aadf/freezer.state
	I0913 18:44:15.682456   74987 api_server.go:204] freezer state: "THAWED"
	I0913 18:44:15.682487   74987 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 18:44:15.690716   74987 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 18:44:15.690746   74987 status.go:422] ha-867275 apiserver status = Running (err=<nil>)
	I0913 18:44:15.690764   74987 status.go:257] ha-867275 status: &{Name:ha-867275 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:44:15.690784   74987 status.go:255] checking status of ha-867275-m02 ...
	I0913 18:44:15.691108   74987 cli_runner.go:164] Run: docker container inspect ha-867275-m02 --format={{.State.Status}}
	I0913 18:44:15.713476   74987 status.go:330] ha-867275-m02 host status = "Stopped" (err=<nil>)
	I0913 18:44:15.713502   74987 status.go:343] host is not running, skipping remaining checks
	I0913 18:44:15.713510   74987 status.go:257] ha-867275-m02 status: &{Name:ha-867275-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:44:15.713530   74987 status.go:255] checking status of ha-867275-m03 ...
	I0913 18:44:15.713851   74987 cli_runner.go:164] Run: docker container inspect ha-867275-m03 --format={{.State.Status}}
	I0913 18:44:15.733151   74987 status.go:330] ha-867275-m03 host status = "Running" (err=<nil>)
	I0913 18:44:15.733181   74987 host.go:66] Checking if "ha-867275-m03" exists ...
	I0913 18:44:15.733511   74987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-867275-m03
	I0913 18:44:15.752908   74987 host.go:66] Checking if "ha-867275-m03" exists ...
	I0913 18:44:15.753244   74987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:44:15.753335   74987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-867275-m03
	I0913 18:44:15.780903   74987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/ha-867275-m03/id_rsa Username:docker}
	I0913 18:44:15.883739   74987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:15.896685   74987 kubeconfig.go:125] found "ha-867275" server: "https://192.168.49.254:8443"
	I0913 18:44:15.896712   74987 api_server.go:166] Checking apiserver status ...
	I0913 18:44:15.896761   74987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:44:15.910646   74987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2145/cgroup
	I0913 18:44:15.921089   74987 api_server.go:182] apiserver freezer: "5:freezer:/docker/676a3fd670fe8f787884d13a0129ee01bd60f9da62bdba8896d56777f6c6240c/kubepods/burstable/podc320dab11b53ca9ef87c3a1e921f9f2e/6a0caa09b0b09cffbee0df9a72dba425685353def54c0798b12195f029685e65"
	I0913 18:44:15.921193   74987 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/676a3fd670fe8f787884d13a0129ee01bd60f9da62bdba8896d56777f6c6240c/kubepods/burstable/podc320dab11b53ca9ef87c3a1e921f9f2e/6a0caa09b0b09cffbee0df9a72dba425685353def54c0798b12195f029685e65/freezer.state
	I0913 18:44:15.931147   74987 api_server.go:204] freezer state: "THAWED"
	I0913 18:44:15.931173   74987 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 18:44:15.939421   74987 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 18:44:15.939451   74987 status.go:422] ha-867275-m03 apiserver status = Running (err=<nil>)
	I0913 18:44:15.939462   74987 status.go:257] ha-867275-m03 status: &{Name:ha-867275-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:44:15.939486   74987 status.go:255] checking status of ha-867275-m04 ...
	I0913 18:44:15.939828   74987 cli_runner.go:164] Run: docker container inspect ha-867275-m04 --format={{.State.Status}}
	I0913 18:44:15.958511   74987 status.go:330] ha-867275-m04 host status = "Running" (err=<nil>)
	I0913 18:44:15.958535   74987 host.go:66] Checking if "ha-867275-m04" exists ...
	I0913 18:44:15.958855   74987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-867275-m04
	I0913 18:44:15.979618   74987 host.go:66] Checking if "ha-867275-m04" exists ...
	I0913 18:44:15.979927   74987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 18:44:15.979973   74987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-867275-m04
	I0913 18:44:15.997313   74987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/ha-867275-m04/id_rsa Username:docker}
	I0913 18:44:16.096568   74987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:44:16.108935   74987 status.go:257] ha-867275-m04 status: &{Name:ha-867275-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.79s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (63.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 node start m02 -v=7 --alsologtostderr
E0913 18:44:27.366107    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:27.372478    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:27.383834    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:27.405260    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:27.446518    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:27.527947    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:27.689331    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:28.011293    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:28.652840    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:29.934256    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:32.495647    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:37.618177    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:43.772746    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:47.860106    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:08.342276    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:45:11.475057    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 node start m02 -v=7 --alsologtostderr: (1m2.80934477s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr: (1.006004568s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (63.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.28s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-867275 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-867275 -v=7 --alsologtostderr
E0913 18:45:49.303866    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-867275 -v=7 --alsologtostderr: (33.908642334s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-867275 --wait=true -v=7 --alsologtostderr
E0913 18:47:11.226179    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-867275 --wait=true -v=7 --alsologtostderr: (3m13.217645482s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-867275
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.28s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.46s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 node delete m03 -v=7 --alsologtostderr: (10.495389224s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.46s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

TestMultiControlPlane/serial/StopCluster (32.96s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 stop -v=7 --alsologtostderr
E0913 18:49:27.366232    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:49:43.772690    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 stop -v=7 --alsologtostderr: (32.843499718s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr: exit status 7 (115.561466ms)

-- stdout --
	ha-867275
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-867275-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-867275-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 18:49:53.660358  102474 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:49:53.660757  102474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:53.660769  102474 out.go:358] Setting ErrFile to fd 2...
	I0913 18:49:53.660775  102474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:49:53.661041  102474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 18:49:53.661238  102474 out.go:352] Setting JSON to false
	I0913 18:49:53.661263  102474 mustload.go:65] Loading cluster: ha-867275
	I0913 18:49:53.661711  102474 config.go:182] Loaded profile config "ha-867275": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:49:53.661736  102474 status.go:255] checking status of ha-867275 ...
	I0913 18:49:53.662350  102474 cli_runner.go:164] Run: docker container inspect ha-867275 --format={{.State.Status}}
	I0913 18:49:53.662877  102474 notify.go:220] Checking for updates...
	I0913 18:49:53.680261  102474 status.go:330] ha-867275 host status = "Stopped" (err=<nil>)
	I0913 18:49:53.680281  102474 status.go:343] host is not running, skipping remaining checks
	I0913 18:49:53.680288  102474 status.go:257] ha-867275 status: &{Name:ha-867275 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:53.680311  102474 status.go:255] checking status of ha-867275-m02 ...
	I0913 18:49:53.680615  102474 cli_runner.go:164] Run: docker container inspect ha-867275-m02 --format={{.State.Status}}
	I0913 18:49:53.697353  102474 status.go:330] ha-867275-m02 host status = "Stopped" (err=<nil>)
	I0913 18:49:53.697374  102474 status.go:343] host is not running, skipping remaining checks
	I0913 18:49:53.697381  102474 status.go:257] ha-867275-m02 status: &{Name:ha-867275-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 18:49:53.697402  102474 status.go:255] checking status of ha-867275-m04 ...
	I0913 18:49:53.697736  102474 cli_runner.go:164] Run: docker container inspect ha-867275-m04 --format={{.State.Status}}
	I0913 18:49:53.726585  102474 status.go:330] ha-867275-m04 host status = "Stopped" (err=<nil>)
	I0913 18:49:53.726609  102474 status.go:343] host is not running, skipping remaining checks
	I0913 18:49:53.726617  102474 status.go:257] ha-867275-m04 status: &{Name:ha-867275-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.96s)

TestMultiControlPlane/serial/RestartCluster (93s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-867275 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0913 18:49:55.067804    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-867275 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.007436536s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.00s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (45.27s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-867275 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-867275 --control-plane -v=7 --alsologtostderr: (44.223736465s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-867275 status -v=7 --alsologtostderr: (1.047589146s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestImageBuild/serial/Setup (35.2s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-873590 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-873590 --driver=docker  --container-runtime=docker: (35.198868239s)
--- PASS: TestImageBuild/serial/Setup (35.20s)

TestImageBuild/serial/NormalBuild (1.96s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-873590
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-873590: (1.962323157s)
--- PASS: TestImageBuild/serial/NormalBuild (1.96s)

TestImageBuild/serial/BuildWithBuildArg (1.42s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-873590
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-873590: (1.419640628s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.42s)

TestImageBuild/serial/BuildWithDockerIgnore (0.95s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-873590
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.95s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-873590
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

TestJSONOutput/start/Command (77.71s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-023921 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-023921 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m17.712031317s)
--- PASS: TestJSONOutput/start/Command (77.71s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-023921 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-023921 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (10.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-023921 --output=json --user=testUser
E0913 18:54:27.366091    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-023921 --output=json --user=testUser: (10.871528151s)
--- PASS: TestJSONOutput/stop/Command (10.87s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-340559 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-340559 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.369315ms)

-- stdout --
	{"specversion":"1.0","id":"7639424a-373d-4716-8f04-4661ee6d3d08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-340559] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"df12ef50-9b9f-4c29-aecc-b19198d9e304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"364c02cb-37f7-4422-a031-f2d3bc257ffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"031064ec-33af-4a0f-b99c-ff2275386398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig"}}
	{"specversion":"1.0","id":"b9d164ab-1882-4b26-8f86-32cffa8b8862","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube"}}
	{"specversion":"1.0","id":"51bd0903-34ab-4640-b7ac-79bbde9141e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"035bf563-9a00-48d9-bdbe-677e91aa3dac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0ff9cb2b-67c4-4981-a629-b3bf6330931a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --

helpers_test.go:175: Cleaning up "json-output-error-340559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-340559
--- PASS: TestErrorJSONOutput (0.21s)
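Each stdout line above is a CloudEvents envelope, which is what makes the `--output=json` mode machine-checkable. A minimal sketch of pulling the error event out of one such line (the error line is copied verbatim from the stdout block above; the parsing logic is illustrative, not the test's actual code):

```python
import json

# The DRV_UNSUPPORTED_OS error event, copied from the stdout block above.
line = ('{"specversion":"1.0","id":"0ff9cb2b-67c4-4981-a629-b3bf6330931a",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
        '"issues":"","message":"The driver \'fail\' is not supported on linux/arm64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
if event["type"] == "io.k8s.sigs.minikube.error":
    # Note: exitcode is serialized as a string inside the event payload.
    print(event["data"]["name"], event["data"]["exitcode"])  # → DRV_UNSUPPORTED_OS 56
```

The `exit status 56` reported by the CLI matches the `exitcode` field carried inside the error event.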

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-732095 --network=
E0913 18:54:43.773085    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-732095 --network=: (31.11936456s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-732095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-732095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-732095: (2.182277801s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.33s)

TestKicCustomNetwork/use_default_bridge_network (35.07s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-003759 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-003759 --network=bridge: (32.973271002s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-003759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-003759
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-003759: (2.079176318s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.07s)

TestKicExistingNetwork (36.28s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-776874 --network=existing-network
E0913 18:56:06.836436    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-776874 --network=existing-network: (34.124689305s)
helpers_test.go:175: Cleaning up "existing-network-776874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-776874
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-776874: (1.979593443s)
--- PASS: TestKicExistingNetwork (36.28s)

TestKicCustomSubnet (37.15s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-598366 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-598366 --subnet=192.168.60.0/24: (34.932227661s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-598366 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-598366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-598366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-598366: (2.194178656s)
--- PASS: TestKicCustomSubnet (37.15s)
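TestKicCustomSubnet checks that the subnet read back via `docker network inspect custom-subnet-598366 --format "{{(index .IPAM.Config 0).Subnet}}"` matches the one requested with `--subnet`. A rough sketch of that comparison, assuming the inspect output for a passing run equals the requested 192.168.60.0/24 (the node IP 192.168.60.2 is a hypothetical first-assigned address):

```python
import ipaddress

requested = "192.168.60.0/24"
# Assumed output of: docker network inspect custom-subnet-598366
#   --format "{{(index .IPAM.Config 0).Subnet}}"
reported = "192.168.60.0/24"

net = ipaddress.ip_network(requested)
# The created network must advertise exactly the requested subnet...
assert ipaddress.ip_network(reported) == net
# ...and any node IP docker hands out must fall inside that range.
assert ipaddress.ip_address("192.168.60.2") in net
```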

                                                
                                    
TestKicStaticIP (32.58s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-419379 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-419379 --static-ip=192.168.200.200: (30.327648074s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-419379 ip
helpers_test.go:175: Cleaning up "static-ip-419379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-419379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-419379: (2.107036638s)
--- PASS: TestKicStaticIP (32.58s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (69.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-618333 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-618333 --driver=docker  --container-runtime=docker: (30.558908707s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-621098 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-621098 --driver=docker  --container-runtime=docker: (33.071016283s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-618333
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-621098
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-621098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-621098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-621098: (2.183089343s)
helpers_test.go:175: Cleaning up "first-618333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-618333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-618333: (2.137529937s)
--- PASS: TestMinikubeProfile (69.30s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-121157 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-121157 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.829985713s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.83s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-121157 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (10.62s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-123430 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-123430 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.621953205s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.62s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-123430 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.53s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-121157 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-121157 --alsologtostderr -v=5: (1.525337433s)
--- PASS: TestMountStart/serial/DeleteFirst (1.53s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-123430 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-123430
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-123430: (1.206174607s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (9.33s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-123430
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-123430: (8.326472084s)
--- PASS: TestMountStart/serial/RestartStopped (9.33s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-123430 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (88.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-285073 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0913 18:59:27.366098    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:59:43.772393    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-285073 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m27.698527694s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (88.45s)

TestMultiNode/serial/DeployApp2Nodes (47.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-285073 -- rollout status deployment/busybox: (3.746027914s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0913 19:00:50.429143    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-lbtkn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-rj6jg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-lbtkn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-rj6jg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-lbtkn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-rj6jg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (47.45s)
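The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are a poll loop: the test re-runs the jsonpath query until both busybox replicas report an IP. A sketch of the parsing step, assuming the single-quoted, space-separated format seen in the stdout snippets above (the second IP 10.244.1.2 is hypothetical):

```python
def pod_ips(jsonpath_output: str) -> list[str]:
    # kubectl -o jsonpath='{.items[*].status.podIP}' prints IPs space-separated;
    # the test's output wraps the value in single quotes, as in "'10.244.0.3'" above.
    return jsonpath_output.strip().strip("'").split()

assert pod_ips("'10.244.0.3'") == ["10.244.0.3"]            # one IP -> keep polling
assert len(pod_ips("'10.244.0.3 10.244.1.2'")) == 2          # both IPs -> proceed
```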

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-lbtkn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-lbtkn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-rj6jg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-285073 -- exec busybox-7dff88458-rj6jg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)
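The test extracts the host IP with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, i.e. field 3 of line 5 of nslookup's output, and then pings it (192.168.67.1 above). A sketch of that extraction against an assumed busybox-style nslookup transcript:

```python
# Assumed busybox nslookup output; the real transcript may differ,
# but the pipeline's NR==5 / field-3 logic is taken from the command above.
sample = """Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal
"""

line5 = sample.splitlines()[4]   # awk 'NR==5'  (1-based line 5)
ip = line5.split(" ")[2]         # cut -d' ' -f3
print(ip)                        # → 192.168.67.1
```

The extracted address is then the target of `ping -c 1`, which is why the ping commands above use 192.168.67.1.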

                                                
                                    
TestMultiNode/serial/AddNode (18.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-285073 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-285073 -v 3 --alsologtostderr: (17.799196878s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.67s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-285073 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)
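The labels check above wraps each node's `.metadata.labels` map in a bracketed, comma-separated list via jsonpath's `{range}`/`{end}`. One hedged way to turn such output back into JSON for assertions (the sample node data is hypothetical; the `rstrip` handles the trailing comma that `{range}` leaves before the closing bracket):

```python
import json

# Hypothetical output of:
#   kubectl get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
raw = ('[{"kubernetes.io/hostname":"multinode-285073","minikube.k8s.io/name":"multinode-285073"},'
       '{"kubernetes.io/hostname":"multinode-285073-m02","minikube.k8s.io/name":"multinode-285073"},]')

# Strip the brackets and the trailing comma, then re-wrap as a JSON array.
nodes = json.loads("[" + raw.strip("[]").rstrip(",") + "]")
assert all("minikube.k8s.io/name" in labels for labels in nodes)
```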

                                                
                                    
TestMultiNode/serial/ProfileList (0.4s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

TestMultiNode/serial/CopyFile (10.81s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp testdata/cp-test.txt multinode-285073:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile931372569/001/cp-test_multinode-285073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073:/home/docker/cp-test.txt multinode-285073-m02:/home/docker/cp-test_multinode-285073_multinode-285073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m02 "sudo cat /home/docker/cp-test_multinode-285073_multinode-285073-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073:/home/docker/cp-test.txt multinode-285073-m03:/home/docker/cp-test_multinode-285073_multinode-285073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m03 "sudo cat /home/docker/cp-test_multinode-285073_multinode-285073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp testdata/cp-test.txt multinode-285073-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile931372569/001/cp-test_multinode-285073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073-m02:/home/docker/cp-test.txt multinode-285073:/home/docker/cp-test_multinode-285073-m02_multinode-285073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073 "sudo cat /home/docker/cp-test_multinode-285073-m02_multinode-285073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073-m02:/home/docker/cp-test.txt multinode-285073-m03:/home/docker/cp-test_multinode-285073-m02_multinode-285073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m03 "sudo cat /home/docker/cp-test_multinode-285073-m02_multinode-285073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp testdata/cp-test.txt multinode-285073-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile931372569/001/cp-test_multinode-285073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073-m03:/home/docker/cp-test.txt multinode-285073:/home/docker/cp-test_multinode-285073-m03_multinode-285073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073 "sudo cat /home/docker/cp-test_multinode-285073-m03_multinode-285073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 cp multinode-285073-m03:/home/docker/cp-test.txt multinode-285073-m02:/home/docker/cp-test_multinode-285073-m03_multinode-285073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 ssh -n multinode-285073-m02 "sudo cat /home/docker/cp-test_multinode-285073-m03_multinode-285073-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.81s)

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-285073 node stop m03: (1.224037707s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-285073 status: exit status 7 (535.097688ms)
-- stdout --
	multinode-285073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-285073-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-285073-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr: exit status 7 (549.548434ms)
-- stdout --
	multinode-285073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-285073-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-285073-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0913 19:02:02.760221  177138 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:02:02.760405  177138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:02:02.760436  177138 out.go:358] Setting ErrFile to fd 2...
	I0913 19:02:02.760464  177138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:02:02.760759  177138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 19:02:02.760975  177138 out.go:352] Setting JSON to false
	I0913 19:02:02.761032  177138 mustload.go:65] Loading cluster: multinode-285073
	I0913 19:02:02.761070  177138 notify.go:220] Checking for updates...
	I0913 19:02:02.761538  177138 config.go:182] Loaded profile config "multinode-285073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 19:02:02.761580  177138 status.go:255] checking status of multinode-285073 ...
	I0913 19:02:02.762847  177138 cli_runner.go:164] Run: docker container inspect multinode-285073 --format={{.State.Status}}
	I0913 19:02:02.781836  177138 status.go:330] multinode-285073 host status = "Running" (err=<nil>)
	I0913 19:02:02.781859  177138 host.go:66] Checking if "multinode-285073" exists ...
	I0913 19:02:02.782326  177138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-285073
	I0913 19:02:02.808703  177138 host.go:66] Checking if "multinode-285073" exists ...
	I0913 19:02:02.809019  177138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 19:02:02.809068  177138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-285073
	I0913 19:02:02.831070  177138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/multinode-285073/id_rsa Username:docker}
	I0913 19:02:02.931822  177138 ssh_runner.go:195] Run: systemctl --version
	I0913 19:02:02.936376  177138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:02:02.948835  177138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 19:02:03.004912  177138 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-13 19:02:02.994112685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 19:02:03.005713  177138 kubeconfig.go:125] found "multinode-285073" server: "https://192.168.67.2:8443"
	I0913 19:02:03.005741  177138 api_server.go:166] Checking apiserver status ...
	I0913 19:02:03.005807  177138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 19:02:03.027727  177138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2314/cgroup
	I0913 19:02:03.039162  177138 api_server.go:182] apiserver freezer: "5:freezer:/docker/8e1090c7cd721044d01cb582555049e96034726cc69f48212671ea50a8149c88/kubepods/burstable/pod7949e0a8ea959c17cf21f7da88945704/63870148b163bf6eff2b41ce94038a24eb08bb46a0caa94e646b31ab9c04724d"
	I0913 19:02:03.039244  177138 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8e1090c7cd721044d01cb582555049e96034726cc69f48212671ea50a8149c88/kubepods/burstable/pod7949e0a8ea959c17cf21f7da88945704/63870148b163bf6eff2b41ce94038a24eb08bb46a0caa94e646b31ab9c04724d/freezer.state
	I0913 19:02:03.049743  177138 api_server.go:204] freezer state: "THAWED"
	I0913 19:02:03.049782  177138 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0913 19:02:03.057865  177138 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0913 19:02:03.057900  177138 status.go:422] multinode-285073 apiserver status = Running (err=<nil>)
	I0913 19:02:03.057912  177138 status.go:257] multinode-285073 status: &{Name:multinode-285073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:02:03.057929  177138 status.go:255] checking status of multinode-285073-m02 ...
	I0913 19:02:03.058290  177138 cli_runner.go:164] Run: docker container inspect multinode-285073-m02 --format={{.State.Status}}
	I0913 19:02:03.076163  177138 status.go:330] multinode-285073-m02 host status = "Running" (err=<nil>)
	I0913 19:02:03.076196  177138 host.go:66] Checking if "multinode-285073-m02" exists ...
	I0913 19:02:03.076528  177138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-285073-m02
	I0913 19:02:03.096959  177138 host.go:66] Checking if "multinode-285073-m02" exists ...
	I0913 19:02:03.097299  177138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 19:02:03.097348  177138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-285073-m02
	I0913 19:02:03.116652  177138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19636-2205/.minikube/machines/multinode-285073-m02/id_rsa Username:docker}
	I0913 19:02:03.220194  177138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 19:02:03.232647  177138 status.go:257] multinode-285073-m02 status: &{Name:multinode-285073-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:02:03.232685  177138 status.go:255] checking status of multinode-285073-m03 ...
	I0913 19:02:03.233007  177138 cli_runner.go:164] Run: docker container inspect multinode-285073-m03 --format={{.State.Status}}
	I0913 19:02:03.249930  177138 status.go:330] multinode-285073-m03 host status = "Stopped" (err=<nil>)
	I0913 19:02:03.249956  177138 status.go:343] host is not running, skipping remaining checks
	I0913 19:02:03.249964  177138 status.go:257] multinode-285073-m03 status: &{Name:multinode-285073-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (11.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-285073 node start m03 -v=7 --alsologtostderr: (10.37447863s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.14s)

TestMultiNode/serial/RestartKeepsNodes (108.53s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-285073
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-285073
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-285073: (22.568572316s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-285073 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-285073 --wait=true -v=8 --alsologtostderr: (1m25.834379811s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-285073
--- PASS: TestMultiNode/serial/RestartKeepsNodes (108.53s)

TestMultiNode/serial/DeleteNode (5.64s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-285073 node delete m03: (4.943391654s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.64s)

TestMultiNode/serial/StopMultiNode (21.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 stop
E0913 19:04:27.367050    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-285073 stop: (21.598823736s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-285073 status: exit status 7 (97.081915ms)
-- stdout --
	multinode-285073
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-285073-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr: exit status 7 (103.230507ms)
-- stdout --
	multinode-285073
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-285073-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0913 19:04:30.312051  190782 out.go:345] Setting OutFile to fd 1 ...
	I0913 19:04:30.312251  190782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:04:30.312285  190782 out.go:358] Setting ErrFile to fd 2...
	I0913 19:04:30.312310  190782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 19:04:30.312603  190782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-2205/.minikube/bin
	I0913 19:04:30.312827  190782 out.go:352] Setting JSON to false
	I0913 19:04:30.312889  190782 mustload.go:65] Loading cluster: multinode-285073
	I0913 19:04:30.312920  190782 notify.go:220] Checking for updates...
	I0913 19:04:30.313402  190782 config.go:182] Loaded profile config "multinode-285073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 19:04:30.313711  190782 status.go:255] checking status of multinode-285073 ...
	I0913 19:04:30.314542  190782 cli_runner.go:164] Run: docker container inspect multinode-285073 --format={{.State.Status}}
	I0913 19:04:30.337342  190782 status.go:330] multinode-285073 host status = "Stopped" (err=<nil>)
	I0913 19:04:30.337363  190782 status.go:343] host is not running, skipping remaining checks
	I0913 19:04:30.337370  190782 status.go:257] multinode-285073 status: &{Name:multinode-285073 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 19:04:30.337395  190782 status.go:255] checking status of multinode-285073-m02 ...
	I0913 19:04:30.337720  190782 cli_runner.go:164] Run: docker container inspect multinode-285073-m02 --format={{.State.Status}}
	I0913 19:04:30.365528  190782 status.go:330] multinode-285073-m02 host status = "Stopped" (err=<nil>)
	I0913 19:04:30.365550  190782 status.go:343] host is not running, skipping remaining checks
	I0913 19:04:30.365556  190782 status.go:257] multinode-285073-m02 status: &{Name:multinode-285073-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.80s)

TestMultiNode/serial/RestartMultiNode (56.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-285073 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0913 19:04:43.772708    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-285073 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.036166203s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-285073 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.73s)

TestMultiNode/serial/ValidateNameConflict (35.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-285073
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-285073-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-285073-m02 --driver=docker  --container-runtime=docker: exit status 14 (81.307325ms)
-- stdout --
	* [multinode-285073-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-285073-m02' is duplicated with machine name 'multinode-285073-m02' in profile 'multinode-285073'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-285073-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-285073-m03 --driver=docker  --container-runtime=docker: (32.384816868s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-285073
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-285073: exit status 80 (529.028078ms)
-- stdout --
	* Adding node m03 to cluster multinode-285073 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-285073-m03 already exists in multinode-285073-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-285073-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-285073-m03: (2.351452287s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.41s)

TestPreload (105.29s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-609779 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-609779 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m4.657000121s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-609779 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-609779 image pull gcr.io/k8s-minikube/busybox: (2.426681013s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-609779
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-609779: (10.970224272s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-609779 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-609779 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (24.647161554s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-609779 image list
helpers_test.go:175: Cleaning up "test-preload-609779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-609779
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-609779: (2.247029727s)
--- PASS: TestPreload (105.29s)

TestScheduledStopUnix (105.34s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-949168 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-949168 --memory=2048 --driver=docker  --container-runtime=docker: (32.068131606s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-949168 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-949168 -n scheduled-stop-949168
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-949168 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-949168 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-949168 -n scheduled-stop-949168
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-949168
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-949168 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0913 19:09:27.366136    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-949168
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-949168: exit status 7 (62.085613ms)
-- stdout --
	scheduled-stop-949168
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-949168 -n scheduled-stop-949168
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-949168 -n scheduled-stop-949168: exit status 7 (64.251524ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-949168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-949168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-949168: (1.644451596s)
--- PASS: TestScheduledStopUnix (105.34s)

TestSkaffold (119.79s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3241681438 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-174791 --memory=2600 --driver=docker  --container-runtime=docker
E0913 19:09:43.772317    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-174791 --memory=2600 --driver=docker  --container-runtime=docker: (32.693275348s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3241681438 run --minikube-profile skaffold-174791 --kube-context skaffold-174791 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3241681438 run --minikube-profile skaffold-174791 --kube-context skaffold-174791 --status-check=true --port-forward=false --interactive=false: (1m11.668092635s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7fc988c9bb-t7cld" [34230db7-5ae4-4b53-90d6-59c8595cad85] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00412812s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5dbdf67db7-jhxfb" [1c2be52a-1057-48e3-9931-aa1452bd1852] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004731526s
helpers_test.go:175: Cleaning up "skaffold-174791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-174791
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-174791: (2.97487048s)
--- PASS: TestSkaffold (119.79s)

TestInsufficientStorage (13.77s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-273258 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-273258 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.501405151s)

-- stdout --
	{"specversion":"1.0","id":"354d02aa-6c7f-494a-b8b4-2c5072577c2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-273258] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67fcc459-bc7a-4f4c-b7bb-98c8fa7795aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"40d7d8a5-07df-40e8-80b3-d70f35f00c36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"67a80eab-400d-4587-a72c-cff3f716e00f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig"}}
	{"specversion":"1.0","id":"c8bcdfe9-98c6-4c6d-acc3-34a0d2f787a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube"}}
	{"specversion":"1.0","id":"bd326186-0cde-450a-9530-d96ebb00cc92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"51b76b9f-c3af-40f6-a7c1-c6bd60d87ce0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dfbed7b9-22a8-4dbc-8e90-5fb21ba2eadf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a3bc6d2a-1b5c-42b3-9d2c-e9da2dab90ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bd2a45aa-0ea7-4c22-a77e-79956356067f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ea14f7b-c8af-4dee-aa35-a56874aec747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fec0072e-140e-40cc-a1b7-bdef2363a062","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-273258\" primary control-plane node in \"insufficient-storage-273258\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"827d0551-5904-4f02-97d1-ab21b4ac24c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726193793-19634 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4cdd713-25a6-4a3a-941a-3885afc4efed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"81d11e8b-8955-403c-b22a-977d563d0d26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-273258 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-273258 --output=json --layout=cluster: exit status 7 (292.090909ms)

-- stdout --
	{"Name":"insufficient-storage-273258","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-273258","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0913 19:11:48.781890  224843 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-273258" does not appear in /home/jenkins/minikube-integration/19636-2205/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-273258 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-273258 --output=json --layout=cluster: exit status 7 (286.133297ms)

-- stdout --
	{"Name":"insufficient-storage-273258","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-273258","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0913 19:11:49.069924  224904 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-273258" does not appear in /home/jenkins/minikube-integration/19636-2205/kubeconfig
	E0913 19:11:49.080186  224904 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/insufficient-storage-273258/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-273258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-273258
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-273258: (1.684564704s)
--- PASS: TestInsufficientStorage (13.77s)
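The `--output=json` lines in the TestInsufficientStorage output above are CloudEvents-style JSON, one event per line, with the minikube payload under `data`. A minimal sketch of decoding one of them with Python's standard `json` module — the sample line is copied verbatim from the stdout block above, and no fields beyond those visible there are assumed:

```python
import json

# One event line copied verbatim from the `minikube start --output=json` stdout above.
line = (
    '{"specversion":"1.0","id":"354d02aa-6c7f-494a-b8b4-2c5072577c2d",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step",'
    '"datacontenttype":"application/json","data":{"currentstep":"0",'
    '"message":"[insufficient-storage-273258] minikube v1.34.0 on Ubuntu 20.04 (arm64)",'
    '"name":"Initial Minikube Setup","totalsteps":"19"}}'
)

event = json.loads(line)
# The last segment of the event type distinguishes the kinds seen in this log:
# "step", "info", and "error" (the RSRC_DOCKER_STORAGE line is an "error" event).
kind = event["type"].rsplit(".", 1)[-1]
message = event["data"]["message"]
print(kind, message)
```

Applied line-by-line to the whole stdout block, the same lookup surfaces the final `error` event carrying the exit code 26 and the `RSRC_DOCKER_STORAGE` advice text.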

TestRunningBinaryUpgrade (135.21s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2747803209 start -p running-upgrade-329091 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0913 19:14:43.772554    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2747803209 start -p running-upgrade-329091 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m26.263527956s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-329091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0913 19:16:22.993417    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:22.999893    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:23.011871    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:23.033790    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:23.075283    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:23.156679    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:23.318013    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:23.639632    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:24.281235    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:25.563205    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:28.124494    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:33.245957    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:16:43.488103    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-329091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.786089208s)
helpers_test.go:175: Cleaning up "running-upgrade-329091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-329091
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-329091: (2.380197319s)
--- PASS: TestRunningBinaryUpgrade (135.21s)

TestKubernetesUpgrade (391.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.310592765s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-340790
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-340790: (11.089326816s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-340790 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-340790 status --format={{.Host}}: exit status 7 (124.561084ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m47.50336518s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-340790 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (106.807285ms)

-- stdout --
	* [kubernetes-upgrade-340790] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-340790
	    minikube start -p kubernetes-upgrade-340790 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3407902 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-340790 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-340790 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.502893375s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-340790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-340790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-340790: (2.937800889s)
--- PASS: TestKubernetesUpgrade (391.72s)

TestMissingContainerUpgrade (120.37s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1737882102 start -p missing-upgrade-910276 --memory=2200 --driver=docker  --container-runtime=docker
E0913 19:17:03.970014    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1737882102 start -p missing-upgrade-910276 --memory=2200 --driver=docker  --container-runtime=docker: (41.388138928s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-910276
E0913 19:17:44.932470    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-910276: (10.461644085s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-910276
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-910276 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-910276 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.740341692s)
helpers_test.go:175: Cleaning up "missing-upgrade-910276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-910276
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-910276: (2.891041789s)
--- PASS: TestMissingContainerUpgrade (120.37s)

TestStoppedBinaryUpgrade/Setup (0.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestStoppedBinaryUpgrade/Upgrade (84.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.105721182 start -p stopped-upgrade-648340 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0913 19:19:06.857284    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:19:27.366735    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:19:43.772768    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.105721182 start -p stopped-upgrade-648340 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.152143288s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.105721182 -p stopped-upgrade-648340 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.105721182 -p stopped-upgrade-648340 stop: (2.032778626s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-648340 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-648340 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.768233289s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.95s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-648340
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-648340: (1.316592509s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

TestPause/serial/Start (47.08s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-575470 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-575470 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (47.082966613s)
--- PASS: TestPause/serial/Start (47.08s)

TestPause/serial/SecondStartNoReconfiguration (30.74s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-575470 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0913 19:21:22.993362    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-575470 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.724438025s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.74s)

TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-575470 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-575470 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-575470 --output=json --layout=cluster: exit status 2 (361.286177ms)

-- stdout --
	{"Name":"pause-575470","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-575470","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
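The `status --output=json --layout=cluster` document in the VerifyStatus step above nests per-component states under each node. A small sketch of reading it — the status line is copied verbatim from the stdout above, and the HTTP-style codes (418 Paused, 405 Stopped) are just echoed from the log, not assumed:

```python
import json

# The cluster-layout status document printed by the VerifyStatus step above.
raw = (
    '{"Name":"pause-575470","StatusCode":418,"StatusName":"Paused","Step":"Done",'
    '"StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, '
    'storage-gluster, istio-operator","BinaryVersion":"v1.34.0",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},'
    '"Nodes":[{"Name":"pause-575470","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

cluster = json.loads(raw)
print(cluster["Name"], cluster["StatusName"])  # top-level cluster state
for node in cluster["Nodes"]:
    # Each node reports its own apiserver/kubelet component states.
    kubelet_state = node["Components"]["kubelet"]["StatusName"]
    print(node["Name"], "kubelet:", kubelet_state)
```

The non-zero exit status 2 recorded above is consistent with this document: the cluster is reported as Paused rather than Running, so `status` signals the non-OK state through its exit code.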

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-575470 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/PauseAgain (1.09s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-575470 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-575470 --alsologtostderr -v=5: (1.086004518s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

TestPause/serial/DeletePaused (2.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-575470 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-575470 --alsologtostderr -v=5: (2.231668352s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

TestPause/serial/VerifyDeletedResources (0.35s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-575470
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-575470: exit status 1 (15.190687ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-575470: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.35s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349303 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-349303 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (79.208418ms)

-- stdout --
	* [NoKubernetes-349303] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-2205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-2205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (36.49s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349303 --driver=docker  --container-runtime=docker
E0913 19:21:50.698636    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349303 --driver=docker  --container-runtime=docker: (36.025446004s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-349303 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.49s)

TestNoKubernetes/serial/StartWithStopK8s (18.92s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349303 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349303 --no-kubernetes --driver=docker  --container-runtime=docker: (16.690147126s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-349303 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-349303 status -o json: exit status 2 (430.019719ms)

-- stdout --
	{"Name":"NoKubernetes-349303","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-349303
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-349303: (1.802198041s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.92s)

TestNoKubernetes/serial/Start (8.86s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349303 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349303 --no-kubernetes --driver=docker  --container-runtime=docker: (8.858746661s)
--- PASS: TestNoKubernetes/serial/Start (8.86s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-349303 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-349303 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.789884ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (0.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-349303
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-349303: (1.200199042s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (8.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349303 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349303 --driver=docker  --container-runtime=docker: (8.409696392s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.41s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-349303 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-349303 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.052175ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestNetworkPlugins/group/auto/Start (55.19s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (55.191121059s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.19s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

TestNetworkPlugins/group/auto/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5qdpl" [ef84d662-c0f3-4fb3-aec8-6388677b3794] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5qdpl" [ef84d662-c0f3-4fb3-aec8-6388677b3794] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.00441235s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.40s)

TestNetworkPlugins/group/flannel/Start (61.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m1.602949721s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.60s)

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.28s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

TestNetworkPlugins/group/calico/Start (82.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0913 19:24:43.772862    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m22.394237157s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.39s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q2vp5" [0d7fd93a-f295-435d-9da1-73bbde4dff6d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004753732s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k4lnf" [84a55907-a46c-44c7-a295-883a0bc93f09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k4lnf" [84a55907-a46c-44c7-a295-883a0bc93f09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003679233s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.43s)

TestNetworkPlugins/group/flannel/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.37s)

TestNetworkPlugins/group/flannel/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.30s)

TestNetworkPlugins/group/flannel/HairPin (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.40s)

TestNetworkPlugins/group/custom-flannel/Start (55.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (55.199545008s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.20s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dlmmr" [6e2cdd56-e83e-4ee2-b2ae-986af5734118] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006494746s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rmcqt" [a64841e2-8ae0-41ae-a34e-1ea6816de650] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rmcqt" [a64841e2-8ae0-41ae-a34e-1ea6816de650] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003013563s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.33s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.50s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m56lt" [cb71dc10-3fcc-4452-b42d-d1019a3bc089] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m56lt" [cb71dc10-3fcc-4452-b42d-d1019a3bc089] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004584582s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

TestNetworkPlugins/group/false/Start (56.08s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (56.077067279s)
--- PASS: TestNetworkPlugins/group/false/Start (56.08s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/kindnet/Start (72.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m12.691015645s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.69s)

TestNetworkPlugins/group/false/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestNetworkPlugins/group/false/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-77nzd" [36c56d83-d708-47b3-be7a-4247f1519718] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-77nzd" [36c56d83-d708-47b3-be7a-4247f1519718] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.004308522s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.36s)

TestNetworkPlugins/group/false/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.32s)

TestNetworkPlugins/group/false/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.31s)

TestNetworkPlugins/group/false/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.26s)

TestNetworkPlugins/group/kubenet/Start (47.61s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (47.611452307s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (47.61s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lgwcs" [4556e779-ba4d-4e4d-9245-f2d8f47d948a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00413285s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jcfl2" [e3c9fa8b-30fd-4ac1-9fe3-2724b1fdec29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jcfl2" [e3c9fa8b-30fd-4ac1-9fe3-2724b1fdec29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004524864s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.34s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-76vl8" [ee4e4f88-a010-4c45-8d0c-b87619127103] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-76vl8" [ee4e4f88-a010-4c45-8d0c-b87619127103] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004675012s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.43s)

TestNetworkPlugins/group/enable-default-cni/Start (81.90s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m21.902277662s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.90s)

TestNetworkPlugins/group/kubenet/DNS (0.30s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.30s)

TestNetworkPlugins/group/kubenet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.23s)

TestNetworkPlugins/group/kubenet/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.29s)

TestNetworkPlugins/group/bridge/Start (53.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0913 19:30:04.493583    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:04.499933    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:04.511307    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:04.532676    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:04.574100    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:04.656973    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:04.819009    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:05.140238    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:05.781888    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:07.063152    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:09.625338    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:14.747586    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:23.490491    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/auto-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:30:24.989759    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-501053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (53.719037927s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.72s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (12.30s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-501053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lqkzs" [91c7816f-fee1-457e-a99a-4fdc12d6c13c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lqkzs" [91c7816f-fee1-457e-a99a-4fdc12d6c13c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004641096s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-501053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.63s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-501053 replace --force -f testdata/netcat-deployment.yaml
E0913 19:30:45.471339    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mqlbp" [f399a97b-0a7f-4560-afa6-370ba9387fb2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mqlbp" [f399a97b-0a7f-4560-afa6-370ba9387fb2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004355664s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-501053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-501053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (156.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-321615 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-321615 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m36.133007513s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (156.13s)

TestStartStop/group/no-preload/serial/FirstStart (85.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-867855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:31:22.994222    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:23.459775    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:26.433401    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:43.941908    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:45.411863    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/auto-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:48.792801    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:48.799102    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:48.810426    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:48.831738    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:48.873071    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:48.954469    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:49.115963    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:49.437330    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:50.079161    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:51.360825    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:53.922850    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:31:59.044156    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:09.286125    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:24.903188    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:29.768015    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.060631    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.553124    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.559603    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.571047    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.592651    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.634595    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.715964    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:46.877811    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-867855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m25.481113598s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (85.48s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-867855 create -f testdata/busybox.yaml
E0913 19:32:47.200080    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2c0d592-5849-49f7-afc2-b340bc58f9a3] Pending
E0913 19:32:47.842232    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [a2c0d592-5849-49f7-afc2-b340bc58f9a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0913 19:32:48.355109    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:32:49.124259    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [a2c0d592-5849-49f7-afc2-b340bc58f9a3] Running
E0913 19:32:51.685946    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004532094s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-867855 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-867855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0913 19:32:56.807909    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-867855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/no-preload/serial/Stop (10.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-867855 --alsologtostderr -v=3
E0913 19:33:07.052057    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-867855 --alsologtostderr -v=3: (10.92079967s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-867855 -n no-preload-867855
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-867855 -n no-preload-867855: exit status 7 (69.531495ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-867855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (269.09s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-867855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:33:10.729441    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:27.533594    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:41.604052    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:41.610503    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:41.621953    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:41.643429    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:41.684803    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:41.766191    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:41.927773    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:42.256107    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:42.899348    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:44.181049    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:46.742428    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:46.824954    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:33:51.864755    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-867855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m28.567324277s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-867855 -n no-preload-867855
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.09s)
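The bursts of `cert_rotation.go:171` errors above all follow the same shape: a watch key pointing at a deleted profile's `client.crt`. As a hedged illustration only (this helper is not part of the minikube test suite), the per-profile counts can be pulled out of the raw log text like this:

```python
import re
from collections import Counter

# Hypothetical helper: count the "no such file or directory" client.crt
# errors per minikube profile, based on the log line format shown above.
PROFILE_RE = re.compile(r"\.minikube/profiles/([^/]+)/client\.crt")

def count_missing_certs(log_text: str) -> Counter:
    """Count missing-client.crt errors per profile name."""
    return Counter(PROFILE_RE.findall(log_text))

# Sample lines trimmed from the report above.
sample = (
    'err="key failed with : open /home/jenkins/minikube-integration/'
    '19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory"\n'
    'err="key failed with : open /home/jenkins/minikube-integration/'
    '19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory"\n'
    'err="key failed with : open /home/jenkins/minikube-integration/'
    '19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory"\n'
)

print(count_missing_certs(sample))
# Counter({'kindnet-501053': 2, 'calico-501053': 1})
```

Running this over the full report makes it easy to see which deleted profiles the background cert-rotation watcher is still referencing.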

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-321615 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c9ed6553-edc5-4424-9496-b5f47f357612] Pending
helpers_test.go:344: "busybox" [c9ed6553-edc5-4424-9496-b5f47f357612] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c9ed6553-edc5-4424-9496-b5f47f357612] Running
E0913 19:34:01.548697    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/auto-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:02.106835    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005070183s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-321615 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-321615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-321615 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/old-k8s-version/serial/Stop (11.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-321615 --alsologtostderr -v=3
E0913 19:34:08.495952    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:10.432720    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:12.691986    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:12.698469    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:12.710128    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:12.731688    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:12.773087    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:12.854573    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:13.017510    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:13.339944    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:13.982301    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:15.263889    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:17.826521    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-321615 --alsologtostderr -v=3: (11.081074989s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-321615 -n old-k8s-version-321615
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-321615 -n old-k8s-version-321615: exit status 7 (74.726706ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-321615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
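The `status error: exit status 7 (may be ok)` lines above show the suite tolerating specific non-zero exit codes from `minikube status`. A minimal sketch of that pattern, assuming the tolerated codes are exactly the ones visible in this report (7 for a stopped host, 2 while paused) rather than minikube's full documented set:

```python
# Sketch only: exit codes taken from this report, not from minikube docs.
# 0 = running, 2 = observed while paused, 7 = observed for a stopped host.
ACCEPTABLE_STATUS_EXITS = {0, 2, 7}

def status_exit_ok(returncode: int) -> bool:
    """True if a `minikube status` exit code should not fail the test."""
    return returncode in ACCEPTABLE_STATUS_EXITS

print(status_exit_ok(7))  # True: host deliberately stopped
print(status_exit_ok(1))  # False: genuine status failure
```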

TestStartStop/group/old-k8s-version/serial/SecondStart (141.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-321615 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0913 19:34:22.588340    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:22.948808    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:27.366459    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:29.254441    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/auto-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:32.651259    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:33.190712    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:43.772909    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:34:53.672809    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:03.549963    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:04.492974    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:30.417237    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:32.196469    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:34.634862    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.051498    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.057850    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.069323    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.090829    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.132320    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.213857    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.375456    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:43.697115    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:44.338824    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:45.620177    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:45.744974    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:45.751384    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:45.762885    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:45.784258    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:45.825706    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:45.907137    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:46.068627    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:46.390295    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:47.031614    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:48.181762    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:48.313221    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:50.875254    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:53.303254    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:35:55.997422    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:02.961039    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:03.544629    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:06.239411    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:22.992865    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:24.025985    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:25.471269    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:26.720677    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:36:30.666865    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-321615 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m21.293256898s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-321615 -n old-k8s-version-321615
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (141.67s)
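The `Done:` lines report command runtimes as Go-style duration strings (here `2m21.293256898s` against the test's 141.67s total, the difference being setup/teardown overhead). A hypothetical helper, not part of the suite, for converting those strings to seconds when comparing timings across runs:

```python
import re

# Parse Go duration strings of the shape NhNmN.NNs (all parts optional),
# e.g. "2m21.293256898s" as printed in this report.
GO_DURATION_RE = re.compile(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?")

def go_duration_to_seconds(text: str) -> float:
    """Convert a Go-style duration string to seconds."""
    m = GO_DURATION_RE.fullmatch(text)
    if m is None or not any(m.groups()):
        raise ValueError(f"not a Go duration: {text!r}")
    hours, minutes, seconds = m.groups()
    return int(hours or 0) * 3600 + int(minutes or 0) * 60 + float(seconds or 0)

print(go_duration_to_seconds("2m21.293256898s"))  # 141.293256898
```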

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2m9rd" [3c77424c-e11a-4413-8287-1f9d5036163d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004863881s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2m9rd" [3c77424c-e11a-4413-8287-1f9d5036163d] Running
E0913 19:36:48.792741    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003955573s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-321615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-321615 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-321615 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-321615 -n old-k8s-version-321615
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-321615 -n old-k8s-version-321615: exit status 2 (342.865704ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-321615 -n old-k8s-version-321615
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-321615 -n old-k8s-version-321615: exit status 2 (374.362638ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-321615 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-321615 -n old-k8s-version-321615
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-321615 -n old-k8s-version-321615
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

TestStartStop/group/embed-certs/serial/FirstStart (77.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-205084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:37:04.987952    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:07.682498    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:37:16.492581    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-205084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m17.905150077s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.91s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8696f" [5597071a-8aa0-449c-ba36-db871c59fbd2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004680193s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8696f" [5597071a-8aa0-449c-ba36-db871c59fbd2] Running
E0913 19:37:46.553243    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004475875s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-867855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-867855 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.97s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-867855 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-867855 -n no-preload-867855
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-867855 -n no-preload-867855: exit status 2 (341.922408ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-867855 -n no-preload-867855
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-867855 -n no-preload-867855: exit status 2 (349.299473ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-867855 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-867855 -n no-preload-867855
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-867855 -n no-preload-867855
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.97s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-036848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:38:14.258882    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-036848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m14.719382673s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.72s)

TestStartStop/group/embed-certs/serial/DeployApp (10.47s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-205084 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [59ed1146-6382-4550-b32e-22ba68b3b55f] Pending
helpers_test.go:344: "busybox" [59ed1146-6382-4550-b32e-22ba68b3b55f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [59ed1146-6382-4550-b32e-22ba68b3b55f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004771134s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-205084 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.47s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-205084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0913 19:38:26.909969    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-205084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.099866757s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-205084 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (10.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-205084 --alsologtostderr -v=3
E0913 19:38:29.604125    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-205084 --alsologtostderr -v=3: (10.876507994s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.88s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-205084 -n embed-certs-205084
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-205084 -n embed-certs-205084: exit status 7 (67.476958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-205084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (266.84s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-205084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:38:41.603948    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.203587    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.210012    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.221500    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.242938    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.284374    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.366179    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.527558    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:57.849193    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:58.491009    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:38:59.773163    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:01.548313    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/auto-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:02.334456    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:07.456439    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:09.312710    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-205084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.486665965s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-205084 -n embed-certs-205084
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.84s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-036848 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0bf80478-257d-4cd2-b0b9-c0bf001b6d15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0913 19:39:12.692105    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [0bf80478-257d-4cd2-b0b9-c0bf001b6d15] Running
E0913 19:39:17.698322    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003589631s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-036848 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-036848 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-036848 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-036848 --alsologtostderr -v=3
E0913 19:39:27.366531    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-036848 --alsologtostderr -v=3: (10.957693768s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848: exit status 7 (73.647413ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-036848 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-036848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:39:38.179764    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:40.397834    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:39:43.773067    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/addons-751971/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:40:04.493203    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:40:19.142104    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:40:43.051959    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:40:45.744893    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:41:02.961364    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/calico-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:41:10.751529    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/bridge-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:41:13.445533    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/enable-default-cni-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:41:22.993423    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/skaffold-174791/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:41:41.063514    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:41:48.792027    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/custom-flannel-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:46.553023    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/false-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.310749    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.317252    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.328639    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.350099    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.391649    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.473097    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.634663    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:47.956337    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:48.597594    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:49.879110    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:52.440480    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:42:57.562540    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-036848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.94064113s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.42s)
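The long runs of `cert_rotation.go:171` errors interleaved above come from profiles (old-k8s-version-321615, no-preload-867855, and others) whose client certificates were removed by earlier deletion tests in this run. When reading a report like this one, they can be collapsed into per-profile counts; a sketch, where `summarize_cert_errors` is an illustrative helper and the sample lines are abridged versions of the log lines above:

```shell
# Reads log text on stdin and prints how many cert_rotation errors were
# reported against each profile's missing client.crt.
summarize_cert_errors() {
  grep 'cert_rotation' \
    | sed -E 's|.*profiles/([^/]+)/client\.crt.*|\1|' \
    | sort | uniq -c | sort -rn
}

# Two abridged lines modeled on the report:
printf '%s\n' \
  'E0913 19:42:47.310749 7564 cert_rotation.go:171] open .../profiles/no-preload-867855/client.crt: no such file or directory' \
  'E0913 19:42:47.317252 7564 cert_rotation.go:171] open .../profiles/no-preload-867855/client.crt: no such file or directory' \
  | summarize_cert_errors
# prints the count followed by the profile name, e.g. "2 no-preload-867855"
```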

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7xpr5" [926f42e7-596c-49bd-a890-4ff64454e452] Running
E0913 19:43:07.804659    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005322333s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7xpr5" [926f42e7-596c-49bd-a890-4ff64454e452] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004361616s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-205084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-205084 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-205084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-205084 -n embed-certs-205084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-205084 -n embed-certs-205084: exit status 2 (381.028066ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-205084 -n embed-certs-205084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-205084 -n embed-certs-205084: exit status 2 (349.440099ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-205084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-205084 -n embed-certs-205084
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-205084 -n embed-certs-205084
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.05s)

TestStartStop/group/newest-cni/serial/FirstStart (43.21s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-646184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:43:28.286295    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:43:41.603551    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kindnet-501053/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:43:57.203794    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-646184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (43.205165008s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bgqm9" [232d1484-e9bb-4006-a3b1-040f180b4d70] Running
E0913 19:44:01.548416    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/auto-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004841478s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bgqm9" [232d1484-e9bb-4006-a3b1-040f180b4d70] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003977717s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-036848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-646184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-646184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.148883171s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-646184 --alsologtostderr -v=3
E0913 19:44:09.247727    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/no-preload-867855/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-646184 --alsologtostderr -v=3: (11.086003993s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-036848 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-036848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848: exit status 2 (335.193517ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848: exit status 2 (315.199623ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-036848 --alsologtostderr -v=1
E0913 19:44:12.692508    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/kubenet-501053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-036848 -n default-k8s-diff-port-036848
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-646184 -n newest-cni-646184
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-646184 -n newest-cni-646184: exit status 7 (79.005888ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-646184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (17.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-646184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0913 19:44:24.905184    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/old-k8s-version-321615/client.crt: no such file or directory" logger="UnhandledError"
E0913 19:44:27.366330    7564 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-2205/.minikube/profiles/functional-109833/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-646184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (17.530948999s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-646184 -n newest-cni-646184
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.88s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-646184 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-646184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-646184 -n newest-cni-646184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-646184 -n newest-cni-646184: exit status 2 (337.82404ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-646184 -n newest-cni-646184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-646184 -n newest-cni-646184: exit status 2 (344.820808ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-646184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-646184 -n newest-cni-646184
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-646184 -n newest-cni-646184
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.04s)

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.87s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-557017 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-557017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-557017
--- SKIP: TestDownloadOnlyKic (0.87s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.61s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-501053 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-501053

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-501053

>>> host: /etc/nsswitch.conf:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> host: /etc/hosts:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> host: /etc/resolv.conf:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-501053

>>> host: crictl pods:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> host: crictl containers:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> k8s: describe netcat deployment:
error: context "cilium-501053" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-501053" does not exist

>>> k8s: netcat logs:
error: context "cilium-501053" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-501053" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-501053" does not exist

>>> k8s: coredns logs:
error: context "cilium-501053" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-501053" does not exist

>>> k8s: api server logs:
error: context "cilium-501053" does not exist

>>> host: /etc/cni:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> host: ip a s:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> host: ip r s:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> host: iptables-save:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

>>> host: iptables table nat:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-501053

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-501053
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-501053" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-501053" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-501053
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-501053
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-501053" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-501053" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-501053" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-501053" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-501053" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: kubelet daemon config:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> k8s: kubelet logs:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-501053
>>> host: docker daemon status:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: docker daemon config:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: docker system info:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: cri-docker daemon status:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: cri-docker daemon config:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: cri-dockerd version:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: containerd daemon status:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: containerd daemon config:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: containerd config dump:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: crio daemon status:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: crio daemon config:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: /etc/crio:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
>>> host: crio config:
* Profile "cilium-501053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-501053"
----------------------- debugLogs end: cilium-501053 [took: 5.361155611s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-501053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-501053
--- SKIP: TestNetworkPlugins/group/cilium (5.61s)
TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-759924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-759924
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)