Test Report: Docker_Linux_docker_arm64 19377

81fa2899e75fb9e546311166288b8d27068854ba:2024-08-05:35656

Test failures (3/350)

| Order | Failed test                                                | Duration (s) |
|-------|------------------------------------------------------------|--------------|
| 106   | TestFunctional/parallel/PersistentVolumeClaim              | 189.54       |
| 125   | TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup | 241.09       |
| 153   | TestFunctional/parallel/TunnelCmd/serial/AccessDirect      | 116          |
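Only the first failure is expanded in detail below. To re-run a single failing test locally, the standard Go subtest selector works; a minimal sketch, assuming the minikube repository layout (the integration suite lives under test/integration) and omitting any suite-specific flags:

	# Select one subtest by its slash-separated path. The -timeout must cover
	# the test's own internal waits (this test alone waits up to 4m + 3m).
	go test ./test/integration -run 'TestFunctional/parallel/PersistentVolumeClaim' -v -timeout 30m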
TestFunctional/parallel/PersistentVolumeClaim (189.54s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004281889s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-644345 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-644345 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-644345 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-644345 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7916faf8-6ac9-46d3-aed5-006a182fd8d7] Pending
helpers_test.go:344: "sp-pod" [7916faf8-6ac9-46d3-aed5-006a182fd8d7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0805 11:58:09.038913 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644345 -n functional-644345
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-08-05 11:59:58.200618163 +0000 UTC m=+830.431128235
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-644345 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-644345 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-644345/192.168.49.2
Start Time:       Mon, 05 Aug 2024 11:56:57 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:  10.244.0.9
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4js6j (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-4js6j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-644345
Warning  Failed     3m                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    88s (x4 over 3m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     88s (x4 over 3m)     kubelet            Error: ErrImagePull
Warning  Failed     88s (x3 over 2m45s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     76s (x6 over 2m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    64s (x7 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx"
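The events pin down the root cause: every pull of docker.io/nginx is rejected by Docker Hub's anonymous pull rate limit, so the pod can never leave ImagePullBackOff. On a live cluster, one workaround is to pull with credentials rather than anonymously; a sketch (the secret name regcred and the credential placeholders are illustrative, not part of this run):

	# Store Docker Hub credentials and attach them to the default service
	# account, so pods in the namespace pull as an authenticated user.
	kubectl --context functional-644345 create secret docker-registry regcred \
	  --docker-username=<dockerhub-user> --docker-password=<dockerhub-token>
	kubectl --context functional-644345 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Authenticated pulls have a much higher quota than the anonymous per-IP limit, which a shared CI host like this one exhausts quickly.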
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-644345 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-644345 logs sp-pod -n default: exit status 1 (125.60967ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-644345 logs sp-pod -n default: exit status 1
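kubectl logs has nothing to show here because the myfrontend container never started; for a pod stuck in ImagePullBackOff, the event stream is the useful signal. A sketch of querying it directly, using the same context as above:

	# List only the events attached to sp-pod, oldest first.
	kubectl --context functional-644345 get events -n default \
	  --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp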
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
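The harness's poll loop amounts to a labeled readiness wait. When debugging interactively, the same check can be reproduced with kubectl alone; a sketch, with the selector, namespace, and timeout taken from the log above:

	# Block until the pod labeled test=storage-provisioner reports Ready,
	# or give up after the same 3-minute budget the test used.
	kubectl --context functional-644345 wait pod -l test=storage-provisioner \
	  -n default --for=condition=Ready --timeout=3m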
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-644345
helpers_test.go:235: (dbg) docker inspect functional-644345:

-- stdout --
	[
	    {
	        "Id": "475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb",
	        "Created": "2024-08-05T11:53:53.118135573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2820007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-05T11:53:53.258508129Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/hosts",
	        "LogPath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb-json.log",
	        "Name": "/functional-644345",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-644345:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644345",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd-init/diff:/var/lib/docker/overlay2/22b51aa5a32d3ad801f10227709a4130eadbc6472f8f1192dd08ba018deb2e68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-644345",
	                "Source": "/var/lib/docker/volumes/functional-644345/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644345",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644345",
	                "name.minikube.sigs.k8s.io": "functional-644345",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc65586b10d0f70dfd2718dbb1cfc1f2ec025c77d52c2cbcb3848a37c8ce2366",
	            "SandboxKey": "/var/run/docker/netns/dc65586b10d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36445"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644345": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "62113f275f1e97d8db3e2ecd142547662995719e9e430ce91fe8fe4bc20bbc49",
	                    "EndpointID": "f58b18697acd7a3f57f874d9a0960723ae9618211ecf4b36f64b456422dbba5d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644345",
	                        "475a2c39b082"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
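Two details worth extracting from the inspect dump: the node container itself is healthy (State.Status is "running"), and HostConfig.PortBindings requests ephemeral host ports (empty HostPort) that Docker resolved to concrete ones under NetworkSettings.Ports, e.g. 22/tcp on 127.0.0.1:36443. Rather than reading the full JSON, the same facts can be queried directly; a sketch:

	# Container liveness and the resolved host mapping of the SSH port.
	docker inspect -f '{{.State.Status}}' functional-644345
	docker port functional-644345 22/tcp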
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644345 -n functional-644345
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 logs -n 25: (1.203582522s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-644345 ssh                                                    | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-644345 cache reload                                           | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
	| ssh     | functional-644345 ssh                                                    | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-644345 kubectl --                                             | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
	|         | --context functional-644345                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-644345                                                     | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:56 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                           | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC |                     |
	|         | functional-644345                                                        |                   |         |         |                     |                     |
	| config  | functional-644345 config unset                                           | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| cp      | functional-644345 cp                                                     | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-644345 config get                                             | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-644345 config set                                             | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| config  | functional-644345 config get                                             | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-644345 config unset                                           | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-644345 ssh -n                                                 | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | functional-644345 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-644345 config get                                             | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-644345 ssh echo                                               | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | hello                                                                    |                   |         |         |                     |                     |
	| cp      | functional-644345 cp                                                     | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | functional-644345:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd3952485624/001/cp-test.txt               |                   |         |         |                     |                     |
	| ssh     | functional-644345 ssh cat                                                | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | /etc/hostname                                                            |                   |         |         |                     |                     |
	| ssh     | functional-644345 ssh -n                                                 | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | functional-644345 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| tunnel  | functional-644345 tunnel                                                 | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-644345 tunnel                                                 | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| cp      | functional-644345 cp                                                     | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh     | functional-644345 ssh -n                                                 | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
	|         | functional-644345 sudo cat                                               |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| tunnel  | functional-644345 tunnel                                                 | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:55:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:55:57.693945 2827274 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:55:57.694061 2827274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:57.694065 2827274 out.go:304] Setting ErrFile to fd 2...
	I0805 11:55:57.694069 2827274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:55:57.694314 2827274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 11:55:57.694659 2827274 out.go:298] Setting JSON to false
	I0805 11:55:57.695636 2827274 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70709,"bootTime":1722788249,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 11:55:57.695694 2827274 start.go:139] virtualization:  
	I0805 11:55:57.698948 2827274 out.go:177] * [functional-644345] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 11:55:57.702371 2827274 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 11:55:57.702459 2827274 notify.go:220] Checking for updates...
	I0805 11:55:57.708295 2827274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:55:57.711069 2827274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	I0805 11:55:57.713672 2827274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	I0805 11:55:57.716312 2827274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 11:55:57.718920 2827274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 11:55:57.722131 2827274 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 11:55:57.722225 2827274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:55:57.743733 2827274 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 11:55:57.743855 2827274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:55:57.813952 2827274 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:64 SystemTime:2024-08-05 11:55:57.799042243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:55:57.814058 2827274 docker.go:307] overlay module found
	I0805 11:55:57.817137 2827274 out.go:177] * Using the docker driver based on existing profile
	I0805 11:55:57.819795 2827274 start.go:297] selected driver: docker
	I0805 11:55:57.819804 2827274 start.go:901] validating driver "docker" against &{Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:55:57.819922 2827274 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 11:55:57.820024 2827274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:55:57.898315 2827274 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:64 SystemTime:2024-08-05 11:55:57.888779338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:55:57.898728 2827274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:55:57.898749 2827274 cni.go:84] Creating CNI manager for ""
	I0805 11:55:57.898761 2827274 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 11:55:57.898818 2827274 start.go:340] cluster config:
	{Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:55:57.901665 2827274 out.go:177] * Starting "functional-644345" primary control-plane node in "functional-644345" cluster
	I0805 11:55:57.904382 2827274 cache.go:121] Beginning downloading kic base image for docker with docker
	I0805 11:55:57.907294 2827274 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0805 11:55:57.909904 2827274 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 11:55:57.909960 2827274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 11:55:57.909968 2827274 cache.go:56] Caching tarball of preloaded images
	I0805 11:55:57.910124 2827274 preload.go:172] Found /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 11:55:57.910133 2827274 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 11:55:57.910223 2827274 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 11:55:57.911670 2827274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/config.json ...
	W0805 11:55:57.927189 2827274 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0805 11:55:57.927199 2827274 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 11:55:57.927286 2827274 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 11:55:57.927305 2827274 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 11:55:57.927308 2827274 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 11:55:57.927316 2827274 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 11:55:57.927320 2827274 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0805 11:55:58.058111 2827274 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0805 11:55:58.058137 2827274 cache.go:194] Successfully downloaded all kic artifacts
	I0805 11:55:58.058200 2827274 start.go:360] acquireMachinesLock for functional-644345: {Name:mkc50feaac78d4e648167b3dd0f9a2f0d677d151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 11:55:58.058298 2827274 start.go:364] duration metric: took 71.162µs to acquireMachinesLock for "functional-644345"
	I0805 11:55:58.058320 2827274 start.go:96] Skipping create...Using existing machine configuration
	I0805 11:55:58.058325 2827274 fix.go:54] fixHost starting: 
	I0805 11:55:58.058874 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
	I0805 11:55:58.079072 2827274 fix.go:112] recreateIfNeeded on functional-644345: state=Running err=<nil>
	W0805 11:55:58.079091 2827274 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 11:55:58.083732 2827274 out.go:177] * Updating the running docker "functional-644345" container ...
	I0805 11:55:58.086275 2827274 machine.go:94] provisionDockerMachine start ...
	I0805 11:55:58.086382 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:58.103172 2827274 main.go:141] libmachine: Using SSH client type: native
	I0805 11:55:58.103428 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36443 <nil> <nil>}
	I0805 11:55:58.103435 2827274 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 11:55:58.235982 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-644345
	
	I0805 11:55:58.235997 2827274 ubuntu.go:169] provisioning hostname "functional-644345"
	I0805 11:55:58.236063 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:58.255866 2827274 main.go:141] libmachine: Using SSH client type: native
	I0805 11:55:58.256122 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36443 <nil> <nil>}
	I0805 11:55:58.256131 2827274 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-644345 && echo "functional-644345" | sudo tee /etc/hostname
	I0805 11:55:58.400851 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-644345
	
	I0805 11:55:58.400921 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:58.419302 2827274 main.go:141] libmachine: Using SSH client type: native
	I0805 11:55:58.419541 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36443 <nil> <nil>}
	I0805 11:55:58.419556 2827274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644345/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 11:55:58.552493 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:55:58.552508 2827274 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19377-2789855/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-2789855/.minikube}
	I0805 11:55:58.552530 2827274 ubuntu.go:177] setting up certificates
	I0805 11:55:58.552540 2827274 provision.go:84] configureAuth start
	I0805 11:55:58.552602 2827274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644345
	I0805 11:55:58.569637 2827274 provision.go:143] copyHostCerts
	I0805 11:55:58.569705 2827274 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-2789855/.minikube/cert.pem, removing ...
	I0805 11:55:58.569724 2827274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-2789855/.minikube/cert.pem
	I0805 11:55:58.569799 2827274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-2789855/.minikube/cert.pem (1123 bytes)
	I0805 11:55:58.569906 2827274 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-2789855/.minikube/key.pem, removing ...
	I0805 11:55:58.569910 2827274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-2789855/.minikube/key.pem
	I0805 11:55:58.569934 2827274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-2789855/.minikube/key.pem (1679 bytes)
	I0805 11:55:58.569986 2827274 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.pem, removing ...
	I0805 11:55:58.569989 2827274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.pem
	I0805 11:55:58.570011 2827274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.pem (1078 bytes)
	I0805 11:55:58.570055 2827274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca-key.pem org=jenkins.functional-644345 san=[127.0.0.1 192.168.49.2 functional-644345 localhost minikube]
	I0805 11:55:59.016930 2827274 provision.go:177] copyRemoteCerts
	I0805 11:55:59.016984 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 11:55:59.017030 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:59.036457 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
	I0805 11:55:59.134480 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 11:55:59.159943 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 11:55:59.184695 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 11:55:59.210418 2827274 provision.go:87] duration metric: took 657.865412ms to configureAuth
	I0805 11:55:59.210436 2827274 ubuntu.go:193] setting minikube options for container-runtime
	I0805 11:55:59.210628 2827274 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 11:55:59.210685 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:59.227841 2827274 main.go:141] libmachine: Using SSH client type: native
	I0805 11:55:59.228093 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36443 <nil> <nil>}
	I0805 11:55:59.228101 2827274 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 11:55:59.361086 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0805 11:55:59.361098 2827274 ubuntu.go:71] root file system type: overlay
	I0805 11:55:59.361215 2827274 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 11:55:59.361280 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:59.378934 2827274 main.go:141] libmachine: Using SSH client type: native
	I0805 11:55:59.379177 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36443 <nil> <nil>}
	I0805 11:55:59.379251 2827274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 11:55:59.529054 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
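Editor's note: the unit file echoed above is rendered from a Go template before being piped through `sudo tee`; the empty ExecStart= line deliberately clears the ExecStart inherited from the base dockerd configuration, as the in-file comment explains. A minimal sketch of that rendering step (illustrative only; not minikube's actual template or code):

// unitgen.go - illustrative sketch of rendering a docker.service unit from a
// Go text/template before piping it through `sudo tee`, as the log does above.
package main

import (
	"os"
	"text/template"
)

// unitTmpl is a trimmed-down template; the real unit carries more directives.
const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network-online.target containerd.service
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
# Clear the inherited ExecStart, then set ours; systemd rejects a second
# ExecStart= line for anything but Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock{{range .ExtraArgs}} {{.}}{{end}}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

type unitParams struct {
	ExtraArgs []string
}

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// Hypothetical parameters mirroring the flags seen in the log above.
	p := unitParams{ExtraArgs: []string{
		"-H tcp://0.0.0.0:2376",
		"--default-ulimit=nofile=1048576:1048576",
		"--insecure-registry 10.96.0.0/12",
	}}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}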
	I0805 11:55:59.529140 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:59.547316 2827274 main.go:141] libmachine: Using SSH client type: native
	I0805 11:55:59.547550 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36443 <nil> <nil>}
	I0805 11:55:59.547574 2827274 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 11:55:59.685959 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 11:55:59.685972 2827274 machine.go:97] duration metric: took 1.59968455s to provisionDockerMachine
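Editor's note: the single SSH command at 11:55:59.547 makes this update idempotent: `diff -u` exits 0 when the freshly rendered unit matches the live one, so the move / daemon-reload / restart branch only runs on change (which is why docker is not restarted here). A sketch of how that pipeline could be assembled (helper name is illustrative):

// swapcmd.go - sketch of the "restart only on change" pattern seen above:
// compare the freshly written unit with the live one and only swap it in and
// restart the service when they actually differ.
package main

import "fmt"

// updateUnitCmd returns the shell pipeline run over SSH in the log: if the
// new file is identical, `diff -u` exits 0 and the restart branch is skipped.
func updateUnitCmd(unit string) string {
	cur := "/lib/systemd/system/" + unit
	next := cur + ".new"
	return fmt.Sprintf(
		"sudo diff -u %s %s || { sudo mv %s %s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %s && sudo systemctl -f restart %s; }",
		cur, next, next, cur, unit, unit)
}

func main() {
	fmt.Println(updateUnitCmd("docker.service"))
}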
	I0805 11:55:59.685982 2827274 start.go:293] postStartSetup for "functional-644345" (driver="docker")
	I0805 11:55:59.685993 2827274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 11:55:59.686055 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 11:55:59.686093 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:59.704191 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
	I0805 11:55:59.807892 2827274 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 11:55:59.812795 2827274 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0805 11:55:59.812820 2827274 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0805 11:55:59.812828 2827274 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0805 11:55:59.812834 2827274 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0805 11:55:59.812844 2827274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-2789855/.minikube/addons for local assets ...
	I0805 11:55:59.812897 2827274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-2789855/.minikube/files for local assets ...
	I0805 11:55:59.812972 2827274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem -> 27952332.pem in /etc/ssl/certs
	I0805 11:55:59.813050 2827274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/test/nested/copy/2795233/hosts -> hosts in /etc/test/nested/copy/2795233
	I0805 11:55:59.813102 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2795233
	I0805 11:55:59.822305 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem --> /etc/ssl/certs/27952332.pem (1708 bytes)
	I0805 11:55:59.847602 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/test/nested/copy/2795233/hosts --> /etc/test/nested/copy/2795233/hosts (40 bytes)
	I0805 11:55:59.877861 2827274 start.go:296] duration metric: took 191.864827ms for postStartSetup
	I0805 11:55:59.877952 2827274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 11:55:59.877991 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:55:59.904684 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
	I0805 11:55:59.997168 2827274 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0805 11:56:00.002333 2827274 fix.go:56] duration metric: took 1.943996156s for fixHost
	I0805 11:56:00.002354 2827274 start.go:83] releasing machines lock for "functional-644345", held for 1.944046814s
	I0805 11:56:00.002458 2827274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644345
	I0805 11:56:00.117806 2827274 ssh_runner.go:195] Run: cat /version.json
	I0805 11:56:00.117868 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:56:00.119868 2827274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 11:56:00.119986 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:56:00.174273 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
	I0805 11:56:00.182212 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
	I0805 11:56:00.631510 2827274 ssh_runner.go:195] Run: systemctl --version
	I0805 11:56:00.636666 2827274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 11:56:00.642299 2827274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0805 11:56:00.667210 2827274 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0805 11:56:00.667286 2827274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 11:56:00.679742 2827274 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 11:56:00.679760 2827274 start.go:495] detecting cgroup driver to use...
	I0805 11:56:00.679805 2827274 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0805 11:56:00.679911 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:56:00.701503 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 11:56:00.713210 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 11:56:00.725523 2827274 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 11:56:00.725588 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 11:56:00.737345 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 11:56:00.749105 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 11:56:00.760085 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 11:56:00.771939 2827274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 11:56:00.781764 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 11:56:00.792732 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 11:56:00.811747 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 11:56:00.822691 2827274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 11:56:00.831947 2827274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 11:56:00.841135 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:56:00.958643 2827274 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 11:56:11.340760 2827274 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.382093286s)
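Editor's note: the run of `sed -i` edits before this restart rewrites /etc/containerd/config.toml to match the detected "cgroupfs" driver: SystemdCgroup is forced to false, the sandbox image is pinned to registry.k8s.io/pause:3.9, and conf_dir is pointed at /etc/cni/net.d. A stdlib-only sketch of equivalent in-memory substitutions (the regexes are illustrative approximations of the sed expressions, not minikube's code):

// tomledit.go - sketch of the config.toml rewrites done via `sed -i` above.
package main

import (
	"fmt"
	"regexp"
)

var edits = []struct{ re, repl string }{
	{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
	{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
}

// rewrite applies each substitution across the whole config text.
func rewrite(conf string) string {
	for _, e := range edits {
		conf = regexp.MustCompile(e.re).ReplaceAllString(conf, e.repl)
	}
	return conf
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	fmt.Print(rewrite(in))
}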
	I0805 11:56:11.340777 2827274 start.go:495] detecting cgroup driver to use...
	I0805 11:56:11.340811 2827274 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0805 11:56:11.340872 2827274 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 11:56:11.357072 2827274 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0805 11:56:11.357140 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 11:56:11.370510 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 11:56:11.387322 2827274 ssh_runner.go:195] Run: which cri-dockerd
	I0805 11:56:11.391769 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 11:56:11.400977 2827274 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 11:56:11.422194 2827274 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 11:56:11.521219 2827274 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 11:56:11.613068 2827274 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 11:56:11.613196 2827274 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 11:56:11.635882 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:56:11.754545 2827274 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 11:56:12.302488 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 11:56:12.314479 2827274 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 11:56:12.332061 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 11:56:12.345380 2827274 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 11:56:12.457840 2827274 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 11:56:12.556988 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:56:12.657719 2827274 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 11:56:12.672136 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 11:56:12.683869 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:56:12.782053 2827274 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 11:56:12.872120 2827274 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 11:56:12.872191 2827274 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 11:56:12.876719 2827274 start.go:563] Will wait 60s for crictl version
	I0805 11:56:12.876791 2827274 ssh_runner.go:195] Run: which crictl
	I0805 11:56:12.880769 2827274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 11:56:12.916994 2827274 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 11:56:12.917054 2827274 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 11:56:12.958644 2827274 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 11:56:13.008433 2827274 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 11:56:13.008541 2827274 cli_runner.go:164] Run: docker network inspect functional-644345 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0805 11:56:13.029716 2827274 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0805 11:56:13.037776 2827274 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0805 11:56:13.039345 2827274 kubeadm.go:883] updating cluster {Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 11:56:13.039474 2827274 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 11:56:13.039553 2827274 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 11:56:13.074749 2827274 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-644345
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0805 11:56:13.074762 2827274 docker.go:615] Images already preloaded, skipping extraction
	I0805 11:56:13.074827 2827274 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 11:56:13.107633 2827274 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-644345
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0805 11:56:13.107648 2827274 cache_images.go:84] Images are preloaded, skipping loading
	I0805 11:56:13.107657 2827274 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.30.3 docker true true} ...
	I0805 11:56:13.107779 2827274 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 11:56:13.107848 2827274 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 11:56:13.284601 2827274 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0805 11:56:13.284675 2827274 cni.go:84] Creating CNI manager for ""
	I0805 11:56:13.284689 2827274 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 11:56:13.284698 2827274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 11:56:13.284717 2827274 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644345 NodeName:functional-644345 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:
map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 11:56:13.284869 2827274 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-644345"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
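Editor's note: the generated kubeadm.yaml above is a four-document YAML stream, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A stdlib-only sketch that splits such a stream and reports each document's kind (a real tool would use a YAML parser rather than line scanning):

// kinds.go - sketch: split a multi-document kubeadm.yaml like the one printed
// above on "---" separators and report each document's kind.
package main

import (
	"fmt"
	"strings"
)

func kinds(manifest string) []string {
	var out []string
	for _, doc := range strings.Split(manifest, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
				break
			}
		}
	}
	return out
}

func main() {
	manifest := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	// Prints: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(kinds(manifest))
}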
	I0805 11:56:13.284937 2827274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 11:56:13.296363 2827274 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 11:56:13.296434 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 11:56:13.307140 2827274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0805 11:56:13.355913 2827274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 11:56:13.410604 2827274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2009 bytes)
	I0805 11:56:13.453588 2827274 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0805 11:56:13.458412 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:56:13.606858 2827274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:56:13.634796 2827274 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345 for IP: 192.168.49.2
	I0805 11:56:13.634808 2827274 certs.go:194] generating shared ca certs ...
	I0805 11:56:13.634823 2827274 certs.go:226] acquiring lock for ca certs: {Name:mkf68c149df12db9e13780ffd3b31cf9e53de863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:56:13.634957 2827274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.key
	I0805 11:56:13.635003 2827274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/proxy-client-ca.key
	I0805 11:56:13.635008 2827274 certs.go:256] generating profile certs ...
	I0805 11:56:13.635089 2827274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.key
	I0805 11:56:13.635133 2827274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/apiserver.key.f50bff35
	I0805 11:56:13.635170 2827274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/proxy-client.key
	I0805 11:56:13.635277 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/2795233.pem (1338 bytes)
	W0805 11:56:13.635303 2827274 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/2795233_empty.pem, impossibly tiny 0 bytes
	I0805 11:56:13.635311 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 11:56:13.635337 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem (1078 bytes)
	I0805 11:56:13.635359 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/cert.pem (1123 bytes)
	I0805 11:56:13.635380 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/key.pem (1679 bytes)
	I0805 11:56:13.635419 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem (1708 bytes)
	I0805 11:56:13.636023 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 11:56:13.679559 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 11:56:13.716948 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 11:56:13.829266 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 11:56:13.926272 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 11:56:13.995523 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 11:56:14.115937 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 11:56:14.176581 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 11:56:14.387984 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/2795233.pem --> /usr/share/ca-certificates/2795233.pem (1338 bytes)
	I0805 11:56:14.445340 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem --> /usr/share/ca-certificates/27952332.pem (1708 bytes)
	I0805 11:56:14.556431 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 11:56:14.690894 2827274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 11:56:14.753967 2827274 ssh_runner.go:195] Run: openssl version
	I0805 11:56:14.760052 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 11:56:14.771030 2827274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:56:14.776620 2827274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:47 /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:56:14.776687 2827274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 11:56:14.802558 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 11:56:14.826450 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2795233.pem && ln -fs /usr/share/ca-certificates/2795233.pem /etc/ssl/certs/2795233.pem"
	I0805 11:56:14.847324 2827274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2795233.pem
	I0805 11:56:14.856806 2827274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:53 /usr/share/ca-certificates/2795233.pem
	I0805 11:56:14.856866 2827274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2795233.pem
	I0805 11:56:14.866009 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2795233.pem /etc/ssl/certs/51391683.0"
	I0805 11:56:14.886776 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27952332.pem && ln -fs /usr/share/ca-certificates/27952332.pem /etc/ssl/certs/27952332.pem"
	I0805 11:56:14.906626 2827274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27952332.pem
	I0805 11:56:14.910088 2827274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:53 /usr/share/ca-certificates/27952332.pem
	I0805 11:56:14.910147 2827274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27952332.pem
	I0805 11:56:14.925423 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27952332.pem /etc/ssl/certs/3ec20f2e.0"
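Editor's note: the three `openssl x509 -hash -noout` / `ln -fs` pairs above implement OpenSSL's hashed certificate directory layout: each CA is symlinked as <subject-hash>.0 under /etc/ssl/certs so OpenSSL-based clients can find it by subject. A sketch of one such step, shelling out to the openssl CLI (function name is illustrative; the real paths need root):

// cahash.go - sketch of the hashed-CA-directory pattern seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and
// symlinks the cert as <hash>.0 inside certDir, emulating `ln -fs`.
func linkCert(pem, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certDir, hash)
	os.Remove(link) // replace any stale link, as -fs does
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}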
	I0805 11:56:14.951793 2827274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 11:56:14.955403 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 11:56:14.968958 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 11:56:14.979458 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 11:56:14.986741 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 11:56:14.993966 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 11:56:15.001054 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
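Editor's note: each `-checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration. An equivalent check written against Go's crypto/x509 (a sketch; the path mirrors the log):

// checkend.go - sketch of `openssl x509 -checkend 86400` in Go: parse a PEM
// certificate and fail if it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func checkEnd(path string, within time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(within).After(cert.NotAfter) {
		return fmt.Errorf("certificate expires at %s (within %s)", cert.NotAfter, within)
	}
	return nil
}

func main() {
	if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}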
	I0805 11:56:15.009367 2827274 kubeadm.go:392] StartCluster: {Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:56:15.009528 2827274 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 11:56:15.029715 2827274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 11:56:15.040663 2827274 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 11:56:15.040673 2827274 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 11:56:15.040731 2827274 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 11:56:15.051170 2827274 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:56:15.051789 2827274 kubeconfig.go:125] found "functional-644345" server: "https://192.168.49.2:8441"
	I0805 11:56:15.053893 2827274 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 11:56:15.069948 2827274 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-08-05 11:54:03.148970323 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-08-05 11:56:13.448098338 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0805 11:56:15.069966 2827274 kubeadm.go:1160] stopping kube-system containers ...
	I0805 11:56:15.070037 2827274 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 11:56:15.110885 2827274 docker.go:483] Stopping containers: [19a735639c57 32f524fefe4a c797bab538ca 3e00484e3be0 a8c0721274ad 55417568c28d 272dc1fcdd9e 2f134b2e41a8 b55b3ccef27e 91df8803f2ef 4502de65a9e1 f3d08f679e92 96b7b9f5153e bdf3e5498c95 e3a79a3b7215 04d98ecb2cf4 e52d9a9979f3 c7d982acfbad 8ce6f28d04e9 120e37448711 d7946ad36219 65db6ffe51e9 b838b3c1260b 341891d2fe8c 9b84d2913ca4 1ee9841cd504 2fe5364d8906 0aea6cf8ca35 a0feaed1a256 6a3aa1b2d857 b5f9058c6fce 6a22ec4d4c5d 3128c8ccffb4]
	I0805 11:56:15.110978 2827274 ssh_runner.go:195] Run: docker stop 19a735639c57 32f524fefe4a c797bab538ca 3e00484e3be0 a8c0721274ad 55417568c28d 272dc1fcdd9e 2f134b2e41a8 b55b3ccef27e 91df8803f2ef 4502de65a9e1 f3d08f679e92 96b7b9f5153e bdf3e5498c95 e3a79a3b7215 04d98ecb2cf4 e52d9a9979f3 c7d982acfbad 8ce6f28d04e9 120e37448711 d7946ad36219 65db6ffe51e9 b838b3c1260b 341891d2fe8c 9b84d2913ca4 1ee9841cd504 2fe5364d8906 0aea6cf8ca35 a0feaed1a256 6a3aa1b2d857 b5f9058c6fce 6a22ec4d4c5d 3128c8ccffb4
	I0805 11:56:16.073513 2827274 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 11:56:16.159793 2827274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 11:56:16.173849 2827274 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug  5 11:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug  5 11:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug  5 11:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug  5 11:54 /etc/kubernetes/scheduler.conf
	
	I0805 11:56:16.173909 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0805 11:56:16.190783 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0805 11:56:16.203478 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0805 11:56:16.218847 2827274 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:56:16.218927 2827274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 11:56:16.231789 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0805 11:56:16.244316 2827274 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 11:56:16.244382 2827274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 11:56:16.262809 2827274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 11:56:16.275717 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 11:56:16.375583 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 11:56:18.693297 2827274 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.317686347s)
	I0805 11:56:18.693315 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 11:56:18.856864 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 11:56:18.938998 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 11:56:19.018369 2827274 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:56:19.018432 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:56:19.518542 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:56:20.019327 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:56:20.518569 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:56:20.544615 2827274 api_server.go:72] duration metric: took 1.526246184s to wait for apiserver process to appear ...
	I0805 11:56:20.544632 2827274 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:56:20.544651 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 11:56:23.646936 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 11:56:23.646954 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 11:56:23.646967 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 11:56:23.706834 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 11:56:23.706853 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 11:56:24.045370 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 11:56:24.056829 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 11:56:24.056862 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 11:56:24.545342 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 11:56:24.553113 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 11:56:24.553136 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 11:56:25.044720 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 11:56:25.052740 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0805 11:56:25.066478 2827274 api_server.go:141] control plane version: v1.30.3
	I0805 11:56:25.066498 2827274 api_server.go:131] duration metric: took 4.521860947s to wait for apiserver health ...
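Editor's note: the 403 -> 500 -> 200 progression above is the normal restart sequence. Anonymous /healthz requests are rejected with 403 until the rbac/bootstrap-roles post-start hook installs the default roles, the endpoint then reports 500 while the remaining hooks (including scheduling/bootstrap-system-priority-classes) finish, and it finally returns 200 ("ok"). A sketch of such a poll loop (TLS verification is skipped only because this sketch does not load the cluster CA; minikube's real client authenticates properly):

// healthz.go - sketch of the /healthz polling above: retry until the
// apiserver returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			// 403 before RBAC bootstrap, 500 while post-start hooks run.
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}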
	I0805 11:56:25.066506 2827274 cni.go:84] Creating CNI manager for ""
	I0805 11:56:25.066517 2827274 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 11:56:25.069608 2827274 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 11:56:25.072316 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 11:56:25.084722 2827274 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
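Editor's note: the bridge CNI step copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not print its contents. The conflist below is therefore a hypothetical bridge configuration of the typical shape, not minikube's actual file; only the pod CIDR is taken from the log above.

// cniconf.go - hypothetical bridge CNI conflist (illustrative values).
package main

import (
	"encoding/json"
	"fmt"
)

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Validate that the config is well-formed JSON before writing it out.
	var v map[string]any
	if err := json.Unmarshal([]byte(conflist), &v); err != nil {
		panic(err)
	}
	fmt.Println("valid conflist for network:", v["name"])
}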
	I0805 11:56:25.109900 2827274 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:56:25.121312 2827274 system_pods.go:59] 7 kube-system pods found
	I0805 11:56:25.121335 2827274 system_pods.go:61] "coredns-7db6d8ff4d-rznxg" [6cb35c48-f0fa-4441-84f0-6378d320b427] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 11:56:25.121342 2827274 system_pods.go:61] "etcd-functional-644345" [58c32004-eaf5-4ad2-95dc-87b3ea92fefe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 11:56:25.121350 2827274 system_pods.go:61] "kube-apiserver-functional-644345" [8354ea00-5b32-4bc0-ae24-758c6808e914] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 11:56:25.121358 2827274 system_pods.go:61] "kube-controller-manager-functional-644345" [60fd324e-ec79-4937-94a1-f7ac6b0d7bfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 11:56:25.121363 2827274 system_pods.go:61] "kube-proxy-lgl7w" [15952683-b4f7-4a4e-824a-f3e88a98c26f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 11:56:25.121369 2827274 system_pods.go:61] "kube-scheduler-functional-644345" [7e70b355-fd2d-41f1-a3e4-8fc93d2b84c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 11:56:25.121374 2827274 system_pods.go:61] "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 11:56:25.121380 2827274 system_pods.go:74] duration metric: took 11.468518ms to wait for pod list to return data ...
	I0805 11:56:25.121388 2827274 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:56:25.125598 2827274 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0805 11:56:25.125620 2827274 node_conditions.go:123] node cpu capacity is 2
	I0805 11:56:25.125639 2827274 node_conditions.go:105] duration metric: took 4.246741ms to run NodePressure ...
	I0805 11:56:25.125657 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 11:56:25.400395 2827274 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 11:56:25.405381 2827274 kubeadm.go:739] kubelet initialised
	I0805 11:56:25.405391 2827274 kubeadm.go:740] duration metric: took 4.982522ms waiting for restarted kubelet to initialise ...
	I0805 11:56:25.405398 2827274 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0805 11:56:25.415112 2827274 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:27.421683 2827274 pod_ready.go:102] pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace has status "Ready":"False"
	I0805 11:56:28.923189 2827274 pod_ready.go:92] pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:28.923201 2827274 pod_ready.go:81] duration metric: took 3.50807401s for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:28.923210 2827274 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:30.929225 2827274 pod_ready.go:102] pod "etcd-functional-644345" in "kube-system" namespace has status "Ready":"False"
	I0805 11:56:32.930935 2827274 pod_ready.go:92] pod "etcd-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:32.930951 2827274 pod_ready.go:81] duration metric: took 4.007730444s for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:32.930960 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:34.937419 2827274 pod_ready.go:102] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"False"
	I0805 11:56:37.437335 2827274 pod_ready.go:102] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"False"
	I0805 11:56:38.937254 2827274 pod_ready.go:92] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:38.937266 2827274 pod_ready.go:81] duration metric: took 6.006298812s for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:38.937276 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:38.943319 2827274 pod_ready.go:92] pod "kube-controller-manager-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:38.943330 2827274 pod_ready.go:81] duration metric: took 6.048205ms for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:38.943339 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:38.948847 2827274 pod_ready.go:92] pod "kube-proxy-lgl7w" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:38.948858 2827274 pod_ready.go:81] duration metric: took 5.513349ms for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:38.948868 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:38.954203 2827274 pod_ready.go:92] pod "kube-scheduler-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:38.954214 2827274 pod_ready.go:81] duration metric: took 5.339451ms for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:38.954224 2827274 pod_ready.go:38] duration metric: took 13.548818162s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:56:38.954239 2827274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 11:56:38.961763 2827274 ops.go:34] apiserver oom_adj: -16
	I0805 11:56:38.961775 2827274 kubeadm.go:597] duration metric: took 23.921097045s to restartPrimaryControlPlane
	I0805 11:56:38.961783 2827274 kubeadm.go:394] duration metric: took 23.952428888s to StartCluster
	I0805 11:56:38.961798 2827274 settings.go:142] acquiring lock: {Name:mk4a577f0ff710c971661155cffa585f8a233d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:56:38.961865 2827274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19377-2789855/kubeconfig
	I0805 11:56:38.962514 2827274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-2789855/kubeconfig: {Name:mk43b20405f936d4b5b0f71673ce55a0d9a036ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 11:56:38.962731 2827274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 11:56:38.962982 2827274 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 11:56:38.963012 2827274 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 11:56:38.963068 2827274 addons.go:69] Setting storage-provisioner=true in profile "functional-644345"
	I0805 11:56:38.963090 2827274 addons.go:234] Setting addon storage-provisioner=true in "functional-644345"
	W0805 11:56:38.963095 2827274 addons.go:243] addon storage-provisioner should already be in state true
	I0805 11:56:38.963100 2827274 addons.go:69] Setting default-storageclass=true in profile "functional-644345"
	I0805 11:56:38.963113 2827274 host.go:66] Checking if "functional-644345" exists ...
	I0805 11:56:38.963136 2827274 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-644345"
	I0805 11:56:38.963407 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
	I0805 11:56:38.963505 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
	I0805 11:56:38.966933 2827274 out.go:177] * Verifying Kubernetes components...
	I0805 11:56:38.969902 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 11:56:38.991189 2827274 addons.go:234] Setting addon default-storageclass=true in "functional-644345"
	W0805 11:56:38.991287 2827274 addons.go:243] addon default-storageclass should already be in state true
	I0805 11:56:38.991315 2827274 host.go:66] Checking if "functional-644345" exists ...
	I0805 11:56:38.991719 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
	I0805 11:56:38.995605 2827274 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 11:56:38.998233 2827274 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:56:38.998245 2827274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 11:56:38.998318 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:56:39.015268 2827274 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 11:56:39.015280 2827274 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 11:56:39.015345 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
	I0805 11:56:39.033230 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
	I0805 11:56:39.049921 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
	I0805 11:56:39.152251 2827274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 11:56:39.180381 2827274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 11:56:39.190694 2827274 node_ready.go:35] waiting up to 6m0s for node "functional-644345" to be "Ready" ...
	I0805 11:56:39.194347 2827274 node_ready.go:49] node "functional-644345" has status "Ready":"True"
	I0805 11:56:39.194359 2827274 node_ready.go:38] duration metric: took 3.645744ms for node "functional-644345" to be "Ready" ...
	I0805 11:56:39.194368 2827274 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:56:39.201114 2827274 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:39.264085 2827274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 11:56:39.335083 2827274 pod_ready.go:92] pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:39.335095 2827274 pod_ready.go:81] duration metric: took 133.956963ms for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:39.335104 2827274 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:39.736734 2827274 pod_ready.go:92] pod "etcd-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:39.736747 2827274 pod_ready.go:81] duration metric: took 401.636357ms for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:39.736756 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:39.972097 2827274 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0805 11:56:39.974641 2827274 addons.go:510] duration metric: took 1.01162047s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0805 11:56:40.136039 2827274 pod_ready.go:92] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:40.136057 2827274 pod_ready.go:81] duration metric: took 399.29339ms for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:40.136074 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:40.534908 2827274 pod_ready.go:92] pod "kube-controller-manager-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:40.534920 2827274 pod_ready.go:81] duration metric: took 398.838894ms for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:40.534931 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:40.934944 2827274 pod_ready.go:92] pod "kube-proxy-lgl7w" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:40.934956 2827274 pod_ready.go:81] duration metric: took 400.019169ms for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:40.934967 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:41.334640 2827274 pod_ready.go:92] pod "kube-scheduler-functional-644345" in "kube-system" namespace has status "Ready":"True"
	I0805 11:56:41.334652 2827274 pod_ready.go:81] duration metric: took 399.678519ms for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
	I0805 11:56:41.334663 2827274 pod_ready.go:38] duration metric: took 2.140285876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 11:56:41.334682 2827274 api_server.go:52] waiting for apiserver process to appear ...
	I0805 11:56:41.334760 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 11:56:41.347680 2827274 api_server.go:72] duration metric: took 2.384920838s to wait for apiserver process to appear ...
	I0805 11:56:41.347696 2827274 api_server.go:88] waiting for apiserver healthz status ...
	I0805 11:56:41.347715 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 11:56:41.356292 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0805 11:56:41.357501 2827274 api_server.go:141] control plane version: v1.30.3
	I0805 11:56:41.357516 2827274 api_server.go:131] duration metric: took 9.815179ms to wait for apiserver health ...
	I0805 11:56:41.357523 2827274 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 11:56:41.538606 2827274 system_pods.go:59] 7 kube-system pods found
	I0805 11:56:41.538622 2827274 system_pods.go:61] "coredns-7db6d8ff4d-rznxg" [6cb35c48-f0fa-4441-84f0-6378d320b427] Running
	I0805 11:56:41.538626 2827274 system_pods.go:61] "etcd-functional-644345" [58c32004-eaf5-4ad2-95dc-87b3ea92fefe] Running
	I0805 11:56:41.538630 2827274 system_pods.go:61] "kube-apiserver-functional-644345" [8354ea00-5b32-4bc0-ae24-758c6808e914] Running
	I0805 11:56:41.538634 2827274 system_pods.go:61] "kube-controller-manager-functional-644345" [60fd324e-ec79-4937-94a1-f7ac6b0d7bfb] Running
	I0805 11:56:41.538637 2827274 system_pods.go:61] "kube-proxy-lgl7w" [15952683-b4f7-4a4e-824a-f3e88a98c26f] Running
	I0805 11:56:41.538639 2827274 system_pods.go:61] "kube-scheduler-functional-644345" [7e70b355-fd2d-41f1-a3e4-8fc93d2b84c3] Running
	I0805 11:56:41.538642 2827274 system_pods.go:61] "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running
	I0805 11:56:41.538647 2827274 system_pods.go:74] duration metric: took 181.11877ms to wait for pod list to return data ...
	I0805 11:56:41.538654 2827274 default_sa.go:34] waiting for default service account to be created ...
	I0805 11:56:41.734814 2827274 default_sa.go:45] found service account: "default"
	I0805 11:56:41.734830 2827274 default_sa.go:55] duration metric: took 196.168581ms for default service account to be created ...
	I0805 11:56:41.734838 2827274 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 11:56:41.938240 2827274 system_pods.go:86] 7 kube-system pods found
	I0805 11:56:41.938255 2827274 system_pods.go:89] "coredns-7db6d8ff4d-rznxg" [6cb35c48-f0fa-4441-84f0-6378d320b427] Running
	I0805 11:56:41.938260 2827274 system_pods.go:89] "etcd-functional-644345" [58c32004-eaf5-4ad2-95dc-87b3ea92fefe] Running
	I0805 11:56:41.938264 2827274 system_pods.go:89] "kube-apiserver-functional-644345" [8354ea00-5b32-4bc0-ae24-758c6808e914] Running
	I0805 11:56:41.938268 2827274 system_pods.go:89] "kube-controller-manager-functional-644345" [60fd324e-ec79-4937-94a1-f7ac6b0d7bfb] Running
	I0805 11:56:41.938271 2827274 system_pods.go:89] "kube-proxy-lgl7w" [15952683-b4f7-4a4e-824a-f3e88a98c26f] Running
	I0805 11:56:41.938274 2827274 system_pods.go:89] "kube-scheduler-functional-644345" [7e70b355-fd2d-41f1-a3e4-8fc93d2b84c3] Running
	I0805 11:56:41.938277 2827274 system_pods.go:89] "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running
	I0805 11:56:41.938282 2827274 system_pods.go:126] duration metric: took 203.440023ms to wait for k8s-apps to be running ...
	I0805 11:56:41.938289 2827274 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 11:56:41.938353 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 11:56:41.950885 2827274 system_svc.go:56] duration metric: took 12.586836ms WaitForService to wait for kubelet
	I0805 11:56:41.950905 2827274 kubeadm.go:582] duration metric: took 2.98815192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 11:56:41.950923 2827274 node_conditions.go:102] verifying NodePressure condition ...
	I0805 11:56:42.134783 2827274 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0805 11:56:42.134802 2827274 node_conditions.go:123] node cpu capacity is 2
	I0805 11:56:42.134812 2827274 node_conditions.go:105] duration metric: took 183.884619ms to run NodePressure ...
	I0805 11:56:42.134824 2827274 start.go:241] waiting for startup goroutines ...
	I0805 11:56:42.134831 2827274 start.go:246] waiting for cluster config update ...
	I0805 11:56:42.134841 2827274 start.go:255] writing updated cluster config ...
	I0805 11:56:42.135164 2827274 ssh_runner.go:195] Run: rm -f paused
	I0805 11:56:42.218675 2827274 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 11:56:42.221818 2827274 out.go:177] * Done! kubectl is now configured to use "functional-644345" cluster and "default" namespace by default
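
The restart log above ends with minikube probing the apiserver directly at https://192.168.49.2:8441/healthz and getting a 200 before declaring the cluster ready. The same probe can be reproduced from the test host; a minimal sketch (-k skips TLS verification, since the profile's CA is not in the host trust store):

  curl -k https://192.168.49.2:8441/healthz
  # expected output per the log above: ok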
	
	
	==> Docker <==
	Aug 05 11:56:45 functional-644345 dockerd[7136]: time="2024-08-05T11:56:45.911162112Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
	Aug 05 11:56:45 functional-644345 dockerd[7136]: time="2024-08-05T11:56:45.911212311Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
	Aug 05 11:56:49 functional-644345 dockerd[7136]: time="2024-08-05T11:56:49.017721699Z" level=info msg="ignoring event" container=d981e4b23a741f24094830c29f19b8690f3633b1b7c3a4f6d6e3ea65e456712c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 11:56:52 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c26d3ed09734bfd1cca82649d0f5b915e320bf99f2d8198dbef089fb46ba021c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 05 11:56:53 functional-644345 dockerd[7136]: time="2024-08-05T11:56:53.175594712Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:56:53 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:53Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Aug 05 11:56:58 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/786e607c149a43293cd9d9971c00ddc0a5cc2664c19106b3aeb5a5c905502d29/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 05 11:56:58 functional-644345 dockerd[7136]: time="2024-08-05T11:56:58.748892868Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:56:58 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:58Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Aug 05 11:57:07 functional-644345 dockerd[7136]: time="2024-08-05T11:57:07.342212701Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:57:07 functional-644345 dockerd[7136]: time="2024-08-05T11:57:07.345385151Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:57:13 functional-644345 dockerd[7136]: time="2024-08-05T11:57:13.334327968Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:57:13 functional-644345 dockerd[7136]: time="2024-08-05T11:57:13.336895622Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:57:35 functional-644345 dockerd[7136]: time="2024-08-05T11:57:35.314523217Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:57:35 functional-644345 dockerd[7136]: time="2024-08-05T11:57:35.317297410Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:57:42 functional-644345 dockerd[7136]: time="2024-08-05T11:57:42.332569488Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:57:42 functional-644345 dockerd[7136]: time="2024-08-05T11:57:42.335182779Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:58:26 functional-644345 dockerd[7136]: time="2024-08-05T11:58:26.332978142Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:58:26 functional-644345 dockerd[7136]: time="2024-08-05T11:58:26.336246608Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:58:30 functional-644345 dockerd[7136]: time="2024-08-05T11:58:30.327206450Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:58:30 functional-644345 dockerd[7136]: time="2024-08-05T11:58:30.329936106Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:59:49 functional-644345 dockerd[7136]: time="2024-08-05T11:59:49.411437841Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:59:49 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:59:49Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Aug 05 11:59:58 functional-644345 dockerd[7136]: time="2024-08-05T11:59:58.326992472Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 05 11:59:58 functional-644345 dockerd[7136]: time="2024-08-05T11:59:58.329959820Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
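
Every pull in this window dies on Docker Hub's anonymous rate limit, which is consistent with the three failures listed at the top of this report: sp-pod (myfrontend) and nginx-svc never get their nginx images. Two possible remediations for a runner like this, sketched under the assumption that Hub credentials exist (DOCKER_USER/DOCKER_PASS are placeholders, not values from this report):

  # inside the node (minikube -p functional-644345 ssh), authenticate the daemon
  # so pulls count against a real account instead of the anonymous quota:
  echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
  # or recreate the profile behind a pull-through cache so Docker Hub is never hit:
  minikube start -p functional-644345 --registry-mirror=https://mirror.gcr.io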
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	21b72a56bd21e       2351f570ed0ea       3 minutes ago       Running             kube-proxy                3                   e07603d186008       kube-proxy-lgl7w
	45390a0d00675       ba04bb24b9575       3 minutes ago       Running             storage-provisioner       3                   10d8812558b5d       storage-provisioner
	cdb82ed6a9bf9       2437cf7621777       3 minutes ago       Running             coredns                   2                   763d56c6ca45a       coredns-7db6d8ff4d-rznxg
	575b592e9bffc       61773190d42ff       3 minutes ago       Running             kube-apiserver            0                   1044e89c68931       kube-apiserver-functional-644345
	3e15f18c20f6c       8e97cdb19e7cc       3 minutes ago       Running             kube-controller-manager   3                   74ae83d23c0ba       kube-controller-manager-functional-644345
	8c9d1dd88ef06       d48f992a22722       3 minutes ago       Running             kube-scheduler            3                   5e371c3deda9a       kube-scheduler-functional-644345
	0533e9debf009       014faa467e297       3 minutes ago       Running             etcd                      3                   df6a7e3709663       etcd-functional-644345
	19a735639c57b       014faa467e297       3 minutes ago       Exited              etcd                      2                   a8c0721274ad6       etcd-functional-644345
	32f524fefe4ad       8e97cdb19e7cc       3 minutes ago       Exited              kube-controller-manager   2                   272dc1fcdd9ec       kube-controller-manager-functional-644345
	c797bab538ca1       2351f570ed0ea       3 minutes ago       Exited              kube-proxy                2                   91df8803f2ef5       kube-proxy-lgl7w
	3e00484e3be06       d48f992a22722       3 minutes ago       Exited              kube-scheduler            2                   4502de65a9e15       kube-scheduler-functional-644345
	f3d08f679e92f       ba04bb24b9575       4 minutes ago       Exited              storage-provisioner       2                   b838b3c1260b9       storage-provisioner
	96b7b9f5153ee       2437cf7621777       4 minutes ago       Exited              coredns                   1                   120e374487113       coredns-7db6d8ff4d-rznxg
	
	
	==> coredns [96b7b9f5153e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54057 - 3151 "HINFO IN 8660356540477454018.1651853706661492004. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026590878s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cdb82ed6a9bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46394 - 53341 "HINFO IN 7099334815732562033.2827557498159571306. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021247939s
	
	
	==> describe nodes <==
	Name:               functional-644345
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-644345
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=functional-644345
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T11_54_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:54:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-644345
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:56:23 +0000   Mon, 05 Aug 2024 11:54:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:56:23 +0000   Mon, 05 Aug 2024 11:54:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:56:23 +0000   Mon, 05 Aug 2024 11:54:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:56:23 +0000   Mon, 05 Aug 2024 11:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-644345
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e8a09581c064f1493bc60872b585519
	  System UUID:                70e367f6-896f-4fe9-a485-c8492974a937
	  Boot ID:                    055eef35-1ace-412e-809d-b7b68a43eb42
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7db6d8ff4d-rznxg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m26s
	  kube-system                 etcd-functional-644345                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m39s
	  kube-system                 kube-apiserver-functional-644345             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-functional-644345    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-proxy-lgl7w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-functional-644345             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m24s                  kube-proxy       
	  Normal   Starting                 3m34s                  kube-proxy       
	  Normal   Starting                 4m21s                  kube-proxy       
	  Normal   Starting                 5m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m47s (x8 over 5m47s)  kubelet          Node functional-644345 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m47s (x8 over 5m47s)  kubelet          Node functional-644345 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m47s (x7 over 5m47s)  kubelet          Node functional-644345 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeNotReady             5m40s                  kubelet          Node functional-644345 status is now: NodeNotReady
	  Normal   Starting                 5m40s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5m40s                  kubelet          Node functional-644345 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m40s                  kubelet          Node functional-644345 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m40s                  kubelet          Node functional-644345 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                5m39s                  kubelet          Node functional-644345 status is now: NodeReady
	  Normal   RegisteredNode           5m27s                  node-controller  Node functional-644345 event: Registered Node functional-644345 in Controller
	  Warning  ContainerGCFailed        4m40s                  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           4m11s                  node-controller  Node functional-644345 event: Registered Node functional-644345 in Controller
	  Normal   Starting                 3m41s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m40s (x8 over 3m40s)  kubelet          Node functional-644345 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m40s (x8 over 3m40s)  kubelet          Node functional-644345 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m40s (x7 over 3m40s)  kubelet          Node functional-644345 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m23s                  node-controller  Node functional-644345 event: Registered Node functional-644345 in Controller
	
	
	==> dmesg <==
	[  +0.000642] FS-Cache: N-cookie c=000000ad [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000845] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=00000000e51524aa
	[  +0.000975] FS-Cache: N-key=[8] '4b6d3b0000000000'
	[  +0.006808] FS-Cache: Duplicate cookie detected
	[  +0.000650] FS-Cache: O-cookie c=000000a7 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.000921] FS-Cache: O-cookie d=000000005957fd62{9p.inode} n=000000009ca223dc
	[  +0.000977] FS-Cache: O-key=[8] '4b6d3b0000000000'
	[  +0.000649] FS-Cache: N-cookie c=000000ae [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000898] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=000000008c1874a4
	[  +0.000963] FS-Cache: N-key=[8] '4b6d3b0000000000'
	[  +2.309208] FS-Cache: Duplicate cookie detected
	[  +0.000654] FS-Cache: O-cookie c=000000a5 [p=000000a4 fl=226 nc=0 na=1]
	[  +0.000938] FS-Cache: O-cookie d=000000005957fd62{9p.inode} n=00000000e3df5fa7
	[  +0.000991] FS-Cache: O-key=[8] '4a6d3b0000000000'
	[  +0.000657] FS-Cache: N-cookie c=000000b0 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000867] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=0000000013f782bc
	[  +0.000964] FS-Cache: N-key=[8] '4a6d3b0000000000'
	[  +0.334642] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=000000aa [p=000000a4 fl=226 nc=0 na=1]
	[  +0.000899] FS-Cache: O-cookie d=000000005957fd62{9p.inode} n=0000000061ef2b15
	[  +0.000955] FS-Cache: O-key=[8] '506d3b0000000000'
	[  +0.000663] FS-Cache: N-cookie c=000000b1 [p=000000a4 fl=2 nc=0 na=1]
	[  +0.000856] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=00000000e51524aa
	[  +0.000959] FS-Cache: N-key=[8] '506d3b0000000000'
	[Aug 5 11:19] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [0533e9debf00] <==
	{"level":"info","ts":"2024-08-05T11:56:19.765271Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T11:56:19.765279Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T11:56:19.765492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-08-05T11:56:19.765535Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-05T11:56:19.765611Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T11:56:19.765637Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T11:56:19.774492Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T11:56:19.774715Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T11:56:19.77474Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T11:56:19.774855Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-05T11:56:19.774862Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-05T11:56:21.342524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-05T11:56:21.342826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-05T11:56:21.342994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-08-05T11:56:21.34316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-08-05T11:56:21.343265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-08-05T11:56:21.34338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-08-05T11:56:21.343474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-08-05T11:56:21.345628Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-644345 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T11:56:21.345997Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T11:56:21.346005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T11:56:21.348222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T11:56:21.348262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T11:56:21.348193Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-05T11:56:21.35975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [19a735639c57] <==
	{"level":"info","ts":"2024-08-05T11:56:14.527365Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.48681ms"}
	{"level":"info","ts":"2024-08-05T11:56:14.554208Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-05T11:56:14.564004Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","commit-index":595}
	{"level":"info","ts":"2024-08-05T11:56:14.57137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-05T11:56:14.571603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 3"}
	{"level":"info","ts":"2024-08-05T11:56:14.571616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 3, commit: 595, applied: 0, lastindex: 595, lastterm: 3]"}
	{"level":"warn","ts":"2024-08-05T11:56:14.572782Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-05T11:56:14.582875Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":565}
	{"level":"info","ts":"2024-08-05T11:56:14.586455Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-05T11:56:14.599548Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aec36adc501070cc","timeout":"7s"}
	{"level":"info","ts":"2024-08-05T11:56:14.59989Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-08-05T11:56:14.599921Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-05T11:56:14.600146Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-05T11:56:14.600361Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T11:56:14.6004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T11:56:14.60041Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T11:56:14.600625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-08-05T11:56:14.600672Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-05T11:56:14.600751Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T11:56:14.600775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T11:56:14.614342Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T11:56:14.615444Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T11:56:14.615485Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T11:56:14.615667Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-05T11:56:14.615676Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	
	
	==> kernel <==
	 11:59:59 up 19:42,  0 users,  load average: 0.48, 1.55, 2.04
	Linux functional-644345 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [575b592e9bff] <==
	I0805 11:56:23.746613       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 11:56:23.751857       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 11:56:23.752140       1 policy_source.go:224] refreshing policies
	I0805 11:56:23.754077       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 11:56:23.812660       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 11:56:23.813245       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 11:56:23.814990       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 11:56:23.815019       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 11:56:23.819065       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 11:56:23.819340       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 11:56:23.819370       1 aggregator.go:165] initial CRD sync complete...
	I0805 11:56:23.819601       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 11:56:23.819728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 11:56:23.819828       1 cache.go:39] Caches are synced for autoregister controller
	E0805 11:56:23.821486       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 11:56:24.622593       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 11:56:25.249818       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 11:56:25.263472       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 11:56:25.317876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 11:56:25.373571       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 11:56:25.382707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 11:56:36.696541       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 11:56:36.746405       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 11:56:45.224417       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.190.197"}
	I0805 11:56:52.291210       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.198.96"}
	
	
	==> kube-controller-manager [32f524fefe4a] <==
	
	
	==> kube-controller-manager [3e15f18c20f6] <==
	I0805 11:56:36.440920       1 shared_informer.go:320] Caches are synced for GC
	I0805 11:56:36.447489       1 shared_informer.go:320] Caches are synced for TTL
	I0805 11:56:36.449766       1 shared_informer.go:320] Caches are synced for namespace
	I0805 11:56:36.449894       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0805 11:56:36.451557       1 shared_informer.go:320] Caches are synced for PV protection
	I0805 11:56:36.454915       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 11:56:36.460118       1 shared_informer.go:320] Caches are synced for node
	I0805 11:56:36.460236       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0805 11:56:36.460293       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0805 11:56:36.460304       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0805 11:56:36.460311       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0805 11:56:36.463780       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0805 11:56:36.470133       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0805 11:56:36.470331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.311µs"
	I0805 11:56:36.479572       1 shared_informer.go:320] Caches are synced for stateful set
	I0805 11:56:36.520827       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0805 11:56:36.538957       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 11:56:36.554302       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 11:56:36.592042       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0805 11:56:36.593284       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 11:56:36.609846       1 shared_informer.go:320] Caches are synced for disruption
	I0805 11:56:36.632188       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0805 11:56:37.061380       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 11:56:37.061439       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 11:56:37.089869       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [21b72a56bd21] <==
	I0805 11:56:24.894839       1 server_linux.go:69] "Using iptables proxy"
	I0805 11:56:24.934457       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0805 11:56:24.986052       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0805 11:56:24.986337       1 server_linux.go:165] "Using iptables Proxier"
	I0805 11:56:24.988578       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0805 11:56:24.988730       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0805 11:56:24.988845       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 11:56:24.989179       1 server.go:872] "Version info" version="v1.30.3"
	I0805 11:56:24.989512       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:56:24.990575       1 config.go:192] "Starting service config controller"
	I0805 11:56:24.990956       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 11:56:24.991136       1 config.go:101] "Starting endpoint slice config controller"
	I0805 11:56:24.991219       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 11:56:24.992771       1 config.go:319] "Starting node config controller"
	I0805 11:56:24.992791       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 11:56:25.091723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 11:56:25.091795       1 shared_informer.go:320] Caches are synced for service config
	I0805 11:56:25.094591       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c797bab538ca] <==
	I0805 11:56:14.727627       1 server_linux.go:69] "Using iptables proxy"
	E0805 11:56:14.738157       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644345\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-scheduler [3e00484e3be0] <==
	
	
	==> kube-scheduler [8c9d1dd88ef0] <==
	I0805 11:56:21.391514       1 serving.go:380] Generated self-signed cert in-memory
	W0805 11:56:23.722123       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 11:56:23.722227       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 11:56:23.722258       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 11:56:23.722306       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 11:56:23.758160       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 11:56:23.758197       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:56:23.760406       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 11:56:23.761092       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 11:56:23.771365       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 11:56:23.771410       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 11:56:23.871850       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 11:58:26 functional-644345 kubelet[8900]: E0805 11:58:26.344499    8900 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfhr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(e3c3985a-4f4d-4dad-a476-587a0ab830e7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 11:58:26 functional-644345 kubelet[8900]: E0805 11:58:26.344919    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
	Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330633    8900 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330691    8900 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330792    8900 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4js6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(7916faf8-6ac9-46d3-aed5-006a182fd8d7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330823    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	Aug 05 11:58:39 functional-644345 kubelet[8900]: E0805 11:58:39.085490    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
	Aug 05 11:58:42 functional-644345 kubelet[8900]: E0805 11:58:42.061728    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	Aug 05 11:58:51 functional-644345 kubelet[8900]: E0805 11:58:51.062149    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
	Aug 05 11:58:54 functional-644345 kubelet[8900]: E0805 11:58:54.061934    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	Aug 05 11:59:05 functional-644345 kubelet[8900]: E0805 11:59:05.063723    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
	Aug 05 11:59:06 functional-644345 kubelet[8900]: E0805 11:59:06.061663    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	Aug 05 11:59:19 functional-644345 kubelet[8900]: E0805 11:59:19.062828    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	Aug 05 11:59:20 functional-644345 kubelet[8900]: E0805 11:59:20.062000    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
	Aug 05 11:59:32 functional-644345 kubelet[8900]: E0805 11:59:32.061177    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	Aug 05 11:59:35 functional-644345 kubelet[8900]: E0805 11:59:35.063403    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
	Aug 05 11:59:43 functional-644345 kubelet[8900]: E0805 11:59:43.062281    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414757    8900 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414811    8900 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414897    8900 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfhr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(e3c3985a-4f4d-4dad-a476-587a0ab830e7): ErrImagePull: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414928    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
	Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.330560    8900 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.330647    8900 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.331006    8900 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4js6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(7916faf8-6ac9-46d3-aed5-006a182fd8d7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.331039    8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
	
	
	==> storage-provisioner [45390a0d0067] <==
	I0805 11:56:24.960249       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 11:56:24.977712       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 11:56:24.978470       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 11:56:42.384427       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 11:56:42.384824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-644345_8b944752-9df4-4fd9-9411-17b1358aee8d!
	I0805 11:56:42.386277       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d94ea9c4-0626-49b7-9ee8-92bd8a1db863", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-644345_8b944752-9df4-4fd9-9411-17b1358aee8d became leader
	I0805 11:56:42.485884       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-644345_8b944752-9df4-4fd9-9411-17b1358aee8d!
	I0805 11:56:57.617633       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0805 11:56:57.622724       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4ec02f3a-9b01-4dce-a8f9-defe18b5ab8d 384 0 2024-08-05 11:54:34 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-05 11:54:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b 714 0 2024-08-05 11:56:57 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-05 11:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-05 11:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0805 11:56:57.623242       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0805 11:56:57.628342       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b" provisioned
	I0805 11:56:57.632774       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0805 11:56:57.632792       1 volume_store.go:212] Trying to save persistentvolume "pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b"
	I0805 11:56:57.657364       1 volume_store.go:219] persistentvolume "pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b" saved
	I0805 11:56:57.671887       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b
	
	
	==> storage-provisioner [f3d08f679e92] <==
	I0805 11:55:45.974708       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 11:55:45.991107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 11:55:45.991160       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644345 -n functional-644345
helpers_test.go:254: (dbg) Done: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644345 -n functional-644345: (1.298116031s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-644345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-644345 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-644345 describe pod nginx-svc sp-pod:

-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-644345/192.168.49.2
	Start Time:       Mon, 05 Aug 2024 11:56:52 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfhr2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sfhr2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m9s                 default-scheduler  Successfully assigned default/nginx-svc to functional-644345
	  Warning  Failed     3m8s                 kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    95s (x4 over 3m9s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     95s (x4 over 3m8s)   kubelet            Error: ErrImagePull
	  Warning  Failed     95s (x3 over 2m54s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     82s (x6 over 3m8s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    70s (x7 over 3m8s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-644345/192.168.49.2
	Start Time:       Mon, 05 Aug 2024 11:56:57 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4js6j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-4js6j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m4s                 default-scheduler  Successfully assigned default/sp-pod to functional-644345
	  Warning  Failed     3m3s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    91s (x4 over 3m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     91s (x4 over 3m3s)   kubelet            Error: ErrImagePull
	  Warning  Failed     91s (x3 over 2m48s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     79s (x6 over 3m2s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    67s (x7 over 3m2s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.54s)
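Note: the failure above is environmental rather than a storage defect. The storage-provisioner log shows pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b was provisioned successfully; every container start then failed on Docker Hub's anonymous pull rate limit (toomanyrequests). A minimal mitigation sketch, assuming Docker Hub credentials are available to the job; the secret name "regcred" and the DOCKERHUB_USER/DOCKERHUB_TOKEN variables are hypothetical placeholders, not values from this run:

	# Create a docker-registry pull secret and attach it to the default
	# service account so pods in "default" pull as an authenticated user
	# instead of drawing from the shared anonymous rate limit.
	kubectl --context functional-644345 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" \
	  --docker-password="$DOCKERHUB_TOKEN"
	kubectl --context functional-644345 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

With the secret in place, new pods created from the same manifests would pull docker.io/nginx against the authenticated quota.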

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-644345 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e3c3985a-4f4d-4dad-a476-587a0ab830e7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644345 -n functional-644345
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2024-08-05 12:00:52.708434622 +0000 UTC m=+884.938944686
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-644345 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-644345 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-644345/192.168.49.2
Start Time:       Mon, 05 Aug 2024 11:56:52 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfhr2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-sfhr2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/nginx-svc to functional-644345
  Warning  Failed     3m59s                  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m26s (x4 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     2m26s (x4 over 3m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m26s (x3 over 3m45s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     2m13s (x6 over 3m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m1s (x7 over 3m59s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-644345 logs nginx-svc -n default
E0805 12:00:52.879460 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-644345 logs nginx-svc -n default: exit status 1 (113.703312ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-644345 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.09s)
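The 4m0s wait on "run=nginx-svc" that timed out above can be reproduced outside the harness; a rough sketch using the same context and label selector (kubectl wait is an assumption about tooling on the host, not part of the test itself):

	# Wait up to 240s for the pod to become Ready; with pulls rate-limited
	# this times out the same way the harness's poll loop did.
	kubectl --context functional-644345 -n default wait pod \
	  -l run=nginx-svc --for=condition=Ready --timeout=240s
	# Inspect why it is stuck (expect ErrImagePull / ImagePullBackOff events):
	kubectl --context functional-644345 -n default describe pod -l run=nginx-svc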

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-644345 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.111.198.96   10.111.198.96   80:32310/TCP   5m56s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116.00s)
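AccessDirect never received a URL to hit because nginx-svc never became Ready, even though the service itself was assigned external IP 10.111.198.96. A manual probe sketch, assuming the image pull succeeds and a tunnel is left running in a separate shell:

	# Shell 1: keep the tunnel open so the LoadBalancer external IP is
	# routable from the host (may prompt for sudo to add routes).
	minikube -p functional-644345 tunnel
	# Shell 2: probe the service; the test asserts on this body text.
	curl -fsS http://10.111.198.96 | grep "Welcome to nginx!"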


Test pass (320/350)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 6.73
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.2
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 6.07
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.19
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.17
30 TestBinaryMirror 0.6
31 TestOffline 100
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.12
36 TestAddons/Setup 233.41
38 TestAddons/serial/Volcano 39.98
40 TestAddons/serial/GCPAuth/Namespaces 0.23
42 TestAddons/parallel/Registry 17.02
43 TestAddons/parallel/Ingress 21.11
44 TestAddons/parallel/InspektorGadget 11.81
45 TestAddons/parallel/MetricsServer 5.78
48 TestAddons/parallel/CSI 46.29
49 TestAddons/parallel/Headlamp 19.02
50 TestAddons/parallel/CloudSpanner 6.63
51 TestAddons/parallel/LocalPath 9.53
52 TestAddons/parallel/NvidiaDevicePlugin 5.89
53 TestAddons/parallel/Yakd 11.68
54 TestAddons/StoppedEnableDisable 11.4
55 TestCertOptions 41.21
56 TestCertExpiration 247.46
57 TestDockerFlags 47.59
58 TestForceSystemdFlag 42.11
59 TestForceSystemdEnv 44.19
65 TestErrorSpam/setup 31.28
66 TestErrorSpam/start 0.74
67 TestErrorSpam/status 1
68 TestErrorSpam/pause 1.3
69 TestErrorSpam/unpause 1.38
70 TestErrorSpam/stop 11.04
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 87.28
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.47
77 TestFunctional/serial/KubeContext 0.06
78 TestFunctional/serial/KubectlGetPods 0.1
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
82 TestFunctional/serial/CacheCmd/cache/add_local 1
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
87 TestFunctional/serial/CacheCmd/cache/delete 0.12
88 TestFunctional/serial/MinikubeKubectlCmd 0.14
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
90 TestFunctional/serial/ExtraConfig 44.6
91 TestFunctional/serial/ComponentHealth 0.11
92 TestFunctional/serial/LogsCmd 1.27
93 TestFunctional/serial/LogsFileCmd 1.24
94 TestFunctional/serial/InvalidService 5.09
96 TestFunctional/parallel/ConfigCmd 0.47
97 TestFunctional/parallel/DashboardCmd 10.86
98 TestFunctional/parallel/DryRun 0.44
99 TestFunctional/parallel/InternationalLanguage 0.18
100 TestFunctional/parallel/StatusCmd 1.06
104 TestFunctional/parallel/ServiceCmdConnect 11.65
105 TestFunctional/parallel/AddonsCmd 0.24
108 TestFunctional/parallel/SSHCmd 0.69
109 TestFunctional/parallel/CpCmd 2.39
111 TestFunctional/parallel/FileSync 0.27
112 TestFunctional/parallel/CertSync 1.71
116 TestFunctional/parallel/NodeLabels 0.12
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
120 TestFunctional/parallel/License 0.23
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
127 TestFunctional/parallel/ServiceCmd/List 0.55
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
130 TestFunctional/parallel/ServiceCmd/Format 0.38
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
133 TestFunctional/parallel/ProfileCmd/profile_list 0.4
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
135 TestFunctional/parallel/MountCmd/any-port 6.24
136 TestFunctional/parallel/MountCmd/specific-port 1.66
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.05
138 TestFunctional/parallel/Version/short 0.05
139 TestFunctional/parallel/Version/components 0.93
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.48
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
144 TestFunctional/parallel/ImageCommands/ImageBuild 2.35
145 TestFunctional/parallel/ImageCommands/Setup 0.77
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.96
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
157 TestFunctional/parallel/DockerEnv/bash 1.04
161 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
162 TestFunctional/delete_echo-server_images 0.05
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/StartCluster 134.36
169 TestMultiControlPlane/serial/DeployApp 83.16
170 TestMultiControlPlane/serial/PingHostFromPods 1.76
171 TestMultiControlPlane/serial/AddWorkerNode 27.89
172 TestMultiControlPlane/serial/NodeLabels 0.12
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
174 TestMultiControlPlane/serial/CopyFile 20.07
175 TestMultiControlPlane/serial/StopSecondaryNode 11.81
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
177 TestMultiControlPlane/serial/RestartSecondaryNode 31.57
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.49
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 169.64
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.3
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
182 TestMultiControlPlane/serial/StopCluster 32.8
183 TestMultiControlPlane/serial/RestartCluster 105.32
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
185 TestMultiControlPlane/serial/AddSecondaryNode 42.87
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
189 TestImageBuild/serial/Setup 32.46
190 TestImageBuild/serial/NormalBuild 1.83
191 TestImageBuild/serial/BuildWithBuildArg 0.89
192 TestImageBuild/serial/BuildWithDockerIgnore 0.7
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.71
197 TestJSONOutput/start/Command 56.8
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.63
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.54
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 5.81
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.22
222 TestKicCustomNetwork/create_custom_network 35.64
223 TestKicCustomNetwork/use_default_bridge_network 34.88
224 TestKicExistingNetwork 34.55
225 TestKicCustomSubnet 35.39
226 TestKicStaticIP 32.37
227 TestMainNoArgs 0.05
228 TestMinikubeProfile 69.45
231 TestMountStart/serial/StartWithMountFirst 8.24
232 TestMountStart/serial/VerifyMountFirst 0.26
233 TestMountStart/serial/StartWithMountSecond 8.64
234 TestMountStart/serial/VerifyMountSecond 0.26
235 TestMountStart/serial/DeleteFirst 1.46
236 TestMountStart/serial/VerifyMountPostDelete 0.26
237 TestMountStart/serial/Stop 1.21
238 TestMountStart/serial/RestartStopped 8.36
239 TestMountStart/serial/VerifyMountPostStop 0.25
242 TestMultiNode/serial/FreshStart2Nodes 77.63
243 TestMultiNode/serial/DeployApp2Nodes 36.99
244 TestMultiNode/serial/PingHostFrom2Pods 1.02
245 TestMultiNode/serial/AddNode 19.17
246 TestMultiNode/serial/MultiNodeLabels 0.11
247 TestMultiNode/serial/ProfileList 0.38
248 TestMultiNode/serial/CopyFile 10.47
249 TestMultiNode/serial/StopNode 2.28
250 TestMultiNode/serial/StartAfterStop 11.56
251 TestMultiNode/serial/RestartKeepsNodes 69.72
252 TestMultiNode/serial/DeleteNode 5.79
253 TestMultiNode/serial/StopMultiNode 21.61
254 TestMultiNode/serial/RestartMultiNode 59.62
255 TestMultiNode/serial/ValidateNameConflict 34
260 TestPreload 148.45
262 TestScheduledStopUnix 106.35
263 TestSkaffold 120.05
265 TestInsufficientStorage 11.47
266 TestRunningBinaryUpgrade 107.93
268 TestKubernetesUpgrade 378.98
269 TestMissingContainerUpgrade 144.76
271 TestPause/serial/Start 58.04
272 TestPause/serial/SecondStartNoReconfiguration 29.68
273 TestPause/serial/Pause 0.64
274 TestPause/serial/VerifyStatus 0.33
275 TestPause/serial/Unpause 0.54
276 TestPause/serial/PauseAgain 0.88
277 TestPause/serial/DeletePaused 2.2
278 TestPause/serial/VerifyDeletedResources 0.38
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
281 TestNoKubernetes/serial/StartWithK8s 42.45
293 TestNoKubernetes/serial/StartWithStopK8s 21.67
294 TestNoKubernetes/serial/Start 9.04
295 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
296 TestNoKubernetes/serial/ProfileList 1.1
297 TestNoKubernetes/serial/Stop 1.29
298 TestNoKubernetes/serial/StartNoArgs 8.19
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.45
300 TestStoppedBinaryUpgrade/Setup 0.66
301 TestStoppedBinaryUpgrade/Upgrade 96.01
309 TestNetworkPlugins/group/auto/Start 59.21
310 TestStoppedBinaryUpgrade/MinikubeLogs 1.58
311 TestNetworkPlugins/group/kindnet/Start 87
312 TestNetworkPlugins/group/auto/KubeletFlags 0.39
313 TestNetworkPlugins/group/auto/NetCatPod 12.39
314 TestNetworkPlugins/group/auto/DNS 0.31
315 TestNetworkPlugins/group/auto/Localhost 0.21
316 TestNetworkPlugins/group/auto/HairPin 0.22
317 TestNetworkPlugins/group/calico/Start 82.44
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
320 TestNetworkPlugins/group/kindnet/NetCatPod 10.49
321 TestNetworkPlugins/group/kindnet/DNS 0.51
322 TestNetworkPlugins/group/kindnet/Localhost 0.3
323 TestNetworkPlugins/group/kindnet/HairPin 0.25
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/custom-flannel/Start 73.75
326 TestNetworkPlugins/group/calico/KubeletFlags 0.43
327 TestNetworkPlugins/group/calico/NetCatPod 15.45
328 TestNetworkPlugins/group/calico/DNS 0.31
329 TestNetworkPlugins/group/calico/Localhost 0.21
330 TestNetworkPlugins/group/calico/HairPin 0.25
331 TestNetworkPlugins/group/false/Start 58.65
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.44
334 TestNetworkPlugins/group/custom-flannel/DNS 0.2
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
337 TestNetworkPlugins/group/false/KubeletFlags 0.37
338 TestNetworkPlugins/group/false/NetCatPod 12.4
339 TestNetworkPlugins/group/enable-default-cni/Start 94.18
340 TestNetworkPlugins/group/false/DNS 0.3
341 TestNetworkPlugins/group/false/Localhost 0.33
342 TestNetworkPlugins/group/false/HairPin 0.23
343 TestNetworkPlugins/group/flannel/Start 68.41
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.39
346 TestNetworkPlugins/group/flannel/ControllerPod 6.01
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
348 TestNetworkPlugins/group/flannel/NetCatPod 9.27
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
352 TestNetworkPlugins/group/flannel/DNS 0.26
353 TestNetworkPlugins/group/flannel/Localhost 0.26
354 TestNetworkPlugins/group/flannel/HairPin 0.24
355 TestNetworkPlugins/group/bridge/Start 58.78
356 TestNetworkPlugins/group/kubenet/Start 54.06
357 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
358 TestNetworkPlugins/group/bridge/NetCatPod 11.37
359 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
360 TestNetworkPlugins/group/kubenet/NetCatPod 10.3
361 TestNetworkPlugins/group/bridge/DNS 0.54
362 TestNetworkPlugins/group/bridge/Localhost 0.17
363 TestNetworkPlugins/group/bridge/HairPin 0.17
364 TestNetworkPlugins/group/kubenet/DNS 0.29
365 TestNetworkPlugins/group/kubenet/Localhost 0.27
366 TestNetworkPlugins/group/kubenet/HairPin 0.24
368 TestStartStop/group/old-k8s-version/serial/FirstStart 158.01
370 TestStartStop/group/no-preload/serial/FirstStart 91.49
371 TestStartStop/group/no-preload/serial/DeployApp 8.42
372 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
373 TestStartStop/group/no-preload/serial/Stop 11.03
374 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
375 TestStartStop/group/no-preload/serial/SecondStart 292.55
376 TestStartStop/group/old-k8s-version/serial/DeployApp 9.58
377 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
378 TestStartStop/group/old-k8s-version/serial/Stop 11.02
379 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
380 TestStartStop/group/old-k8s-version/serial/SecondStart 122.53
381 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
383 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/old-k8s-version/serial/Pause 2.8
386 TestStartStop/group/embed-certs/serial/FirstStart 48.48
387 TestStartStop/group/embed-certs/serial/DeployApp 9.37
388 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
389 TestStartStop/group/embed-certs/serial/Stop 10.93
390 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
391 TestStartStop/group/embed-certs/serial/SecondStart 269.27
392 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.23
394 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.86
395 TestStartStop/group/no-preload/serial/Pause 2.89
397 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.39
398 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
399 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
400 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.21
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
402 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 289.62
403 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
404 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
405 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
406 TestStartStop/group/embed-certs/serial/Pause 2.94
408 TestStartStop/group/newest-cni/serial/FirstStart 40.97
409 TestStartStop/group/newest-cni/serial/DeployApp 0
410 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
411 TestStartStop/group/newest-cni/serial/Stop 5.8
412 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
413 TestStartStop/group/newest-cni/serial/SecondStart 20.28
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.83
417 TestStartStop/group/newest-cni/serial/Pause 3.64
418 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
419 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
420 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
421 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.83
TestDownloadOnly/v1.20.0/json-events (8.24s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-467372 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-467372 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.239104486s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.24s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-467372
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-467372: exit status 85 (70.714667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-467372 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC |          |
	|         | -p download-only-467372        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:46:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:46:07.855024 2795238 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:46:07.855153 2795238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:46:07.855164 2795238 out.go:304] Setting ErrFile to fd 2...
	I0805 11:46:07.855169 2795238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:46:07.855417 2795238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	W0805 11:46:07.855561 2795238 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19377-2789855/.minikube/config/config.json: open /home/jenkins/minikube-integration/19377-2789855/.minikube/config/config.json: no such file or directory
	I0805 11:46:07.855967 2795238 out.go:298] Setting JSON to true
	I0805 11:46:07.856834 2795238 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70119,"bootTime":1722788249,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 11:46:07.856903 2795238 start.go:139] virtualization:  
	I0805 11:46:07.859631 2795238 out.go:97] [download-only-467372] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0805 11:46:07.859776 2795238 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 11:46:07.859815 2795238 notify.go:220] Checking for updates...
	I0805 11:46:07.861701 2795238 out.go:169] MINIKUBE_LOCATION=19377
	I0805 11:46:07.863427 2795238 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:46:07.865118 2795238 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	I0805 11:46:07.866950 2795238 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	I0805 11:46:07.868665 2795238 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0805 11:46:07.871938 2795238 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 11:46:07.872226 2795238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:46:07.894353 2795238 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 11:46:07.894464 2795238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:46:07.958557 2795238 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 11:46:07.948486371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:46:07.958697 2795238 docker.go:307] overlay module found
	I0805 11:46:07.960634 2795238 out.go:97] Using the docker driver based on user configuration
	I0805 11:46:07.960660 2795238 start.go:297] selected driver: docker
	I0805 11:46:07.960666 2795238 start.go:901] validating driver "docker" against <nil>
	I0805 11:46:07.960778 2795238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:46:08.019697 2795238 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 11:46:08.008376195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:46:08.019868 2795238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:46:08.020167 2795238 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0805 11:46:08.020368 2795238 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 11:46:08.022734 2795238 out.go:169] Using Docker driver with root privileges
	I0805 11:46:08.024383 2795238 cni.go:84] Creating CNI manager for ""
	I0805 11:46:08.024427 2795238 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 11:46:08.024532 2795238 start.go:340] cluster config:
	{Name:download-only-467372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-467372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:46:08.026252 2795238 out.go:97] Starting "download-only-467372" primary control-plane node in "download-only-467372" cluster
	I0805 11:46:08.026288 2795238 cache.go:121] Beginning downloading kic base image for docker with docker
	I0805 11:46:08.028370 2795238 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0805 11:46:08.028417 2795238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 11:46:08.028507 2795238 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 11:46:08.044376 2795238 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 11:46:08.045125 2795238 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 11:46:08.045241 2795238 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 11:46:08.094210 2795238 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 11:46:08.094239 2795238 cache.go:56] Caching tarball of preloaded images
	I0805 11:46:08.094395 2795238 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 11:46:08.096756 2795238 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 11:46:08.096780 2795238 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 11:46:08.206934 2795238 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-467372 host does not exist
	  To start a cluster, run: "minikube start -p download-only-467372"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
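
The exit status 85 above is the expected outcome for this subtest: "minikube logs" is invoked against a profile created with --download-only, so there is no host to collect logs from (see the "host does not exist" hint in the stdout). A minimal by-hand reproduction, assuming the same locally built binary used throughout this report:

    out/minikube-linux-arm64 start -o=json --download-only -p download-only-467372 --force --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
    out/minikube-linux-arm64 logs -p download-only-467372   # exits 85: the control-plane host was never created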

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-467372
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
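
Together these two subtests cover minikube's delete semantics: "delete --all" removes every profile, and a targeted delete of a profile that is already gone still succeeds (hence the name DeleteAlwaysSucceeds). A sketch of the same sequence, assuming the binary above:

    out/minikube-linux-arm64 delete --all                    # removes every profile
    out/minikube-linux-arm64 delete -p download-only-467372  # still exits 0 even though the profile no longer exists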

TestDownloadOnly/v1.30.3/json-events (6.73s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-622683 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-622683 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.727002013s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.73s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)
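
preload-exists appears to do no more than confirm that the tarball fetched during the json-events subtest is present in the shared cache, which is why it completes in 0.00s. A plausible by-hand equivalent, using the cache path shown in the download logs:

    ls -lh /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4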

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-622683
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-622683: exit status 85 (78.951884ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-467372 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC |                     |
	|         | -p download-only-467372        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC | 05 Aug 24 11:46 UTC |
	| delete  | -p download-only-467372        | download-only-467372 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC | 05 Aug 24 11:46 UTC |
	| start   | -o=json --download-only        | download-only-622683 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC |                     |
	|         | -p download-only-622683        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:46:16
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:46:16.488958 2795442 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:46:16.489193 2795442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:46:16.489221 2795442 out.go:304] Setting ErrFile to fd 2...
	I0805 11:46:16.489239 2795442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:46:16.489528 2795442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 11:46:16.490004 2795442 out.go:298] Setting JSON to true
	I0805 11:46:16.490934 2795442 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70128,"bootTime":1722788249,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 11:46:16.491032 2795442 start.go:139] virtualization:  
	I0805 11:46:16.493192 2795442 out.go:97] [download-only-622683] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 11:46:16.493491 2795442 notify.go:220] Checking for updates...
	I0805 11:46:16.495853 2795442 out.go:169] MINIKUBE_LOCATION=19377
	I0805 11:46:16.497649 2795442 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:46:16.499413 2795442 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	I0805 11:46:16.500855 2795442 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	I0805 11:46:16.502395 2795442 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0805 11:46:16.505290 2795442 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 11:46:16.505570 2795442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:46:16.526222 2795442 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 11:46:16.526342 2795442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:46:16.596543 2795442 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-05 11:46:16.586348957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:46:16.596656 2795442 docker.go:307] overlay module found
	I0805 11:46:16.598581 2795442 out.go:97] Using the docker driver based on user configuration
	I0805 11:46:16.598606 2795442 start.go:297] selected driver: docker
	I0805 11:46:16.598614 2795442 start.go:901] validating driver "docker" against <nil>
	I0805 11:46:16.598727 2795442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:46:16.648763 2795442 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-05 11:46:16.639425316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:46:16.648922 2795442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:46:16.649197 2795442 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0805 11:46:16.649357 2795442 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 11:46:16.651258 2795442 out.go:169] Using Docker driver with root privileges
	I0805 11:46:16.653083 2795442 cni.go:84] Creating CNI manager for ""
	I0805 11:46:16.653118 2795442 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 11:46:16.653129 2795442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 11:46:16.653226 2795442 start.go:340] cluster config:
	{Name:download-only-622683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-622683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:46:16.655251 2795442 out.go:97] Starting "download-only-622683" primary control-plane node in "download-only-622683" cluster
	I0805 11:46:16.655282 2795442 cache.go:121] Beginning downloading kic base image for docker with docker
	I0805 11:46:16.656970 2795442 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0805 11:46:16.657011 2795442 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 11:46:16.657057 2795442 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 11:46:16.672353 2795442 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 11:46:16.672500 2795442 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 11:46:16.672526 2795442 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 11:46:16.672538 2795442 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 11:46:16.672546 2795442 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 11:46:16.725031 2795442 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 11:46:16.725073 2795442 cache.go:56] Caching tarball of preloaded images
	I0805 11:46:16.725253 2795442 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 11:46:16.727149 2795442 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 11:46:16.727179 2795442 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 11:46:16.840075 2795442 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-622683 host does not exist
	  To start a cluster, run: "minikube start -p download-only-622683"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
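
The download URL in the log above carries the md5 that the downloader is asked to verify (checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca). The same check can be repeated by hand against the cached file:

    md5sum /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
    # expected digest: 5a76dba1959f6b6fc5e29e1e172ab9ca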

TestDownloadOnly/v1.30.3/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.20s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-622683
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-rc.0/json-events (6.07s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-821646 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-821646 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.066624653s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (6.07s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-821646
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-821646: exit status 85 (70.742784ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-467372 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC |                     |
	|         | -p download-only-467372           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC | 05 Aug 24 11:46 UTC |
	| delete  | -p download-only-467372           | download-only-467372 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC | 05 Aug 24 11:46 UTC |
	| start   | -o=json --download-only           | download-only-622683 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC |                     |
	|         | -p download-only-622683           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC | 05 Aug 24 11:46 UTC |
	| delete  | -p download-only-622683           | download-only-622683 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC | 05 Aug 24 11:46 UTC |
	| start   | -o=json --download-only           | download-only-821646 | jenkins | v1.33.1 | 05 Aug 24 11:46 UTC |                     |
	|         | -p download-only-821646           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 11:46:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 11:46:23.631897 2795645 out.go:291] Setting OutFile to fd 1 ...
	I0805 11:46:23.632121 2795645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:46:23.632152 2795645 out.go:304] Setting ErrFile to fd 2...
	I0805 11:46:23.632175 2795645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 11:46:23.632526 2795645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 11:46:23.632989 2795645 out.go:298] Setting JSON to true
	I0805 11:46:23.633917 2795645 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70135,"bootTime":1722788249,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 11:46:23.634022 2795645 start.go:139] virtualization:  
	I0805 11:46:23.637124 2795645 out.go:97] [download-only-821646] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 11:46:23.637394 2795645 notify.go:220] Checking for updates...
	I0805 11:46:23.639836 2795645 out.go:169] MINIKUBE_LOCATION=19377
	I0805 11:46:23.642466 2795645 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 11:46:23.645048 2795645 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	I0805 11:46:23.647622 2795645 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	I0805 11:46:23.650238 2795645 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0805 11:46:23.655647 2795645 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 11:46:23.655918 2795645 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 11:46:23.677208 2795645 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 11:46:23.677334 2795645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:46:23.740994 2795645 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 11:46:23.730782846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:46:23.741103 2795645 docker.go:307] overlay module found
	I0805 11:46:23.743840 2795645 out.go:97] Using the docker driver based on user configuration
	I0805 11:46:23.743866 2795645 start.go:297] selected driver: docker
	I0805 11:46:23.743885 2795645 start.go:901] validating driver "docker" against <nil>
	I0805 11:46:23.744001 2795645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 11:46:23.795787 2795645 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 11:46:23.786704381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 11:46:23.795953 2795645 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 11:46:23.796259 2795645 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0805 11:46:23.796420 2795645 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 11:46:23.799735 2795645 out.go:169] Using Docker driver with root privileges
	I0805 11:46:23.802204 2795645 cni.go:84] Creating CNI manager for ""
	I0805 11:46:23.802232 2795645 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 11:46:23.802253 2795645 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 11:46:23.802343 2795645 start.go:340] cluster config:
	{Name:download-only-821646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-821646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 11:46:23.804989 2795645 out.go:97] Starting "download-only-821646" primary control-plane node in "download-only-821646" cluster
	I0805 11:46:23.805016 2795645 cache.go:121] Beginning downloading kic base image for docker with docker
	I0805 11:46:23.807676 2795645 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0805 11:46:23.807701 2795645 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 11:46:23.807885 2795645 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 11:46:23.823337 2795645 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 11:46:23.823455 2795645 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 11:46:23.823480 2795645 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 11:46:23.823489 2795645 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 11:46:23.823497 2795645 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 11:46:23.870174 2795645 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 11:46:23.870211 2795645 cache.go:56] Caching tarball of preloaded images
	I0805 11:46:23.870379 2795645 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 11:46:23.873263 2795645 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 11:46:23.873288 2795645 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 11:46:23.983830 2795645 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 11:46:28.233215 2795645 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 11:46:28.233321 2795645 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-821646 host does not exist
	  To start a cluster, run: "minikube start -p download-only-821646"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.19s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-821646
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-211027 --alsologtostderr --binary-mirror http://127.0.0.1:46671 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-211027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-211027
--- PASS: TestBinaryMirror (0.60s)
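
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (an ephemeral port, 46671 in this run) so that the kubectl/kubelet/kubeadm binaries are fetched from the mirror rather than the default upstream. The shape of the run, assuming a mirror is already listening on that port:

    out/minikube-linux-arm64 start --download-only -p binary-mirror-211027 --alsologtostderr --binary-mirror http://127.0.0.1:46671 --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 delete -p binary-mirror-211027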

TestOffline (100s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-737529 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-737529 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m37.233535357s)
helpers_test.go:175: Cleaning up "offline-docker-737529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-737529
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-737529: (2.764582711s)
--- PASS: TestOffline (100.00s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-245337
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-245337: exit status 85 (100.209778ms)

-- stdout --
	* Profile "addons-245337" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245337"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-245337
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-245337: exit status 85 (116.39559ms)

-- stdout --
	* Profile "addons-245337" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245337"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.12s)
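
Both PreSetup subtests assert the failure path: enabling or disabling an addon on a profile that does not exist yet must fail with exit status 85 and the "Profile ... not found" hint rather than silently creating anything. The pattern, with the exit code made visible:

    out/minikube-linux-arm64 addons enable dashboard -p addons-245337; echo "exit=$?"   # prints exit=85 while the profile is absent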

TestAddons/Setup (233.41s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-245337 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-245337 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m53.406088685s)
--- PASS: TestAddons/Setup (233.41s)

TestAddons/serial/Volcano (39.98s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 54.70413ms
addons_test.go:913: volcano-controller stabilized in 55.888951ms
addons_test.go:897: volcano-scheduler stabilized in 56.156387ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-gbljd" [55a269d5-23c7-412e-ad97-8f22fb1e9ba5] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00433759s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-bbf86" [8f0c1a8f-9384-42c6-ba88-c7966e1d0abb] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003745187s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-kfjhv" [8320d3ff-0730-4661-9240-dd7c40670a4c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004220358s
addons_test.go:932: (dbg) Run:  kubectl --context addons-245337 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-245337 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-245337 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [20964f5e-22ca-4e45-b15e-1c4c6d6598b0] Pending
helpers_test.go:344: "test-job-nginx-0" [20964f5e-22ca-4e45-b15e-1c4c6d6598b0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [20964f5e-22ca-4e45-b15e-1c4c6d6598b0] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003785024s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-245337 addons disable volcano --alsologtostderr -v=1: (10.334601456s)
--- PASS: TestAddons/serial/Volcano (39.98s)
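
The "waiting 6m0s for pods matching ..." lines above are a poll-until-healthy loop over a label selector. A sketch of such a loop using kubectl and the standard library (waitForPodsRunning is a hypothetical name, not the helpers_test.go implementation):

package integration

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// waitForPodsRunning polls kubectl until every pod matching the selector
// reports phase Running, or the timeout elapses.
func waitForPodsRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "get", "pods", "-n", namespace, "-l", selector,
            "-o", "jsonpath={.items[*].status.phase}").Output()
        if err == nil {
            phases := strings.Fields(string(out))
            ready := len(phases) > 0
            for _, p := range phases {
                if p != "Running" {
                    ready = false
                }
            }
            if ready {
                return nil
            }
        }
        time.Sleep(2 * time.Second)
    }
    return fmt.Errorf("pods %q in %q not Running within %v", selector, namespace, timeout)
}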

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-245337 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-245337 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/parallel/Registry (17.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.137248ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-mbw6z" [b3eb4566-61b3-4929-95e8-604f28e5fc9a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006308859s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-d9gfd" [e2ed6cc5-fbab-4aff-98d7-d119d624c589] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004340904s
addons_test.go:342: (dbg) Run:  kubectl --context addons-245337 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-245337 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-245337 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.025863803s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 ip
2024/08/05 11:51:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.02s)
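
Besides the in-cluster wget probe, the test fetches the registry endpoint from the host, which is what the "[DEBUG] GET http://192.168.49.2:5000" line records. An equivalent host-side probe in Go (checkRegistry is a hypothetical name, shown only to illustrate the check):

package integration

import (
    "fmt"
    "net/http"
    "time"
)

// checkRegistry issues an HTTP GET against the registry endpoint exposed on
// the node IP, the same probe as the DEBUG line above.
func checkRegistry(nodeIP string) error {
    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Get(fmt.Sprintf("http://%s:5000", nodeIP))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("unexpected status: %s", resp.Status)
    }
    return nil
}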

TestAddons/parallel/Ingress (21.11s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-245337 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-245337 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-245337 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [93a787ea-e434-4296-9c80-32372640068e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [93a787ea-e434-4296-9c80-32372640068e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003298465s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-245337 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-245337 addons disable ingress-dns --alsologtostderr -v=1: (1.185379021s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-245337 addons disable ingress --alsologtostderr -v=1: (7.816003205s)
--- PASS: TestAddons/parallel/Ingress (21.11s)
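
The curl above sends a Host header so the NGINX ingress controller matches the nginx.example.com rule even though the request targets 127.0.0.1. The same request in Go, where setting req.Host controls the outgoing Host header (a sketch with a hypothetical name; it must run wherever 127.0.0.1 reaches the controller, here via ssh into the node):

package integration

import (
    "net/http"
)

// ingressGet reproduces `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`.
func ingressGet(hostHeader string) (*http.Response, error) {
    req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
    if err != nil {
        return nil, err
    }
    req.Host = hostHeader // Go sends req.Host as the Host header
    return http.DefaultClient.Do(req)
}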

TestAddons/parallel/InspektorGadget (11.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hfp9t" [cd37fd18-aefb-4189-b750-7b57159311bc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003875797s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-245337
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-245337: (5.807950452s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

TestAddons/parallel/MetricsServer (5.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.708733ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-7scgh" [c94387a9-11ad-4e33-8113-072fb3974dfd] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004520208s
addons_test.go:417: (dbg) Run:  kubectl --context addons-245337 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

TestAddons/parallel/CSI (46.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.413556ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-245337 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-245337 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [07c7601f-4bcc-4d9f-bd4f-d8658774eafd] Pending
helpers_test.go:344: "task-pv-pod" [07c7601f-4bcc-4d9f-bd4f-d8658774eafd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [07c7601f-4bcc-4d9f-bd4f-d8658774eafd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003404085s
addons_test.go:590: (dbg) Run:  kubectl --context addons-245337 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245337 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245337 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-245337 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-245337 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-245337 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-245337 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [04f69fba-719f-4d9b-ba18-530d0d65d93f] Pending
helpers_test.go:344: "task-pv-pod-restore" [04f69fba-719f-4d9b-ba18-530d0d65d93f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [04f69fba-719f-4d9b-ba18-530d0d65d93f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003895993s
addons_test.go:632: (dbg) Run:  kubectl --context addons-245337 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-245337 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-245337 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-245337 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.767751892s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.29s)
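
The long run of identical helpers_test.go:394 lines above is a poll loop: the test re-reads the claim's .status.phase until it becomes "Bound". A sketch of that loop (waitPVCPhase is a hypothetical name):

package integration

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// waitPVCPhase polls `kubectl get pvc -o jsonpath={.status.phase}` until the
// claim reaches the wanted phase, typically "Bound".
func waitPVCPhase(kubeContext, name, namespace, want string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "get", "pvc", name, "-o", "jsonpath={.status.phase}",
            "-n", namespace).Output()
        if err == nil && strings.TrimSpace(string(out)) == want {
            return nil
        }
        time.Sleep(2 * time.Second)
    }
    return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
}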

TestAddons/parallel/Headlamp (19.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-245337 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-245337 --alsologtostderr -v=1: (1.240662309s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-bhm99" [7e62cb36-18a4-436f-98b7-3d6003dcf600] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-bhm99" [7e62cb36-18a4-436f-98b7-3d6003dcf600] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003959214s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-245337 addons disable headlamp --alsologtostderr -v=1: (5.773847491s)
--- PASS: TestAddons/parallel/Headlamp (19.02s)

TestAddons/parallel/CloudSpanner (6.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-9chq9" [270deefa-7f28-4433-ac82-f9c2d857c22b] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00339776s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-245337
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

TestAddons/parallel/LocalPath (9.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-245337 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-245337 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245337 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [99294212-c62f-49e7-a7a0-207c2cd47152] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [99294212-c62f-49e7-a7a0-207c2cd47152] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [99294212-c62f-49e7-a7a0-207c2cd47152] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004263089s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-245337 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 ssh "cat /opt/local-path-provisioner/pvc-171d475d-bd10-415f-9c91-c8f606b0923d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-245337 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-245337 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.53s)

TestAddons/parallel/NvidiaDevicePlugin (5.89s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fgxr7" [f5ab8ece-a064-40a6-b038-980dd626f756] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004624402s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-245337
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.89s)

TestAddons/parallel/Yakd (11.68s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-fv2p9" [a9982e54-2843-4a96-8be8-ada3367c7584] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004029325s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-245337 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-245337 addons disable yakd --alsologtostderr -v=1: (5.67145092s)
--- PASS: TestAddons/parallel/Yakd (11.68s)

TestAddons/StoppedEnableDisable (11.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-245337
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-245337: (11.142490099s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-245337
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-245337
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-245337
--- PASS: TestAddons/StoppedEnableDisable (11.40s)

TestCertOptions (41.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-580608 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-580608 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (38.390779864s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-580608 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-580608 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-580608 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-580608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-580608
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-580608: (2.181483937s)
--- PASS: TestCertOptions (41.21s)
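
The openssl call above checks that the extra --apiserver-ips and --apiserver-names values ended up in the certificate's subject alternative names. A standalone Go equivalent using crypto/x509 (illustrative only; it assumes the certificate has been copied out of the node):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

// Decodes a PEM certificate (e.g. apiserver.crt) and prints the SANs that the
// test inspects, plus NotAfter, the field TestCertExpiration below cares about.
func main() {
    data, err := os.ReadFile(os.Args[1])
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    fmt.Println("DNS SANs: ", cert.DNSNames)
    fmt.Println("IP SANs:  ", cert.IPAddresses)
    fmt.Println("NotAfter: ", cert.NotAfter)
}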

TestCertExpiration (247.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-133369 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0805 12:36:51.860346 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-133369 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (39.830384344s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-133369 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0805 12:40:25.194804 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-133369 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (25.207660198s)
helpers_test.go:175: Cleaning up "cert-expiration-133369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-133369
E0805 12:40:30.439230 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-133369: (2.421580835s)
--- PASS: TestCertExpiration (247.46s)

TestDockerFlags (47.59s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-997533 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-997533 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.387873755s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-997533 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-997533 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-997533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-997533
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-997533: (2.358317184s)
--- PASS: TestDockerFlags (47.59s)

TestForceSystemdFlag (42.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-456551 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-456551 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.833310206s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-456551 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-456551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-456551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-456551: (1.937280562s)
--- PASS: TestForceSystemdFlag (42.11s)

TestForceSystemdEnv (44.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-837220 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0805 12:35:25.194944 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-837220 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.598515871s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-837220 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-837220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-837220
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-837220: (2.262872011s)
--- PASS: TestForceSystemdEnv (44.19s)

TestErrorSpam/setup (31.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-279500 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-279500 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-279500 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-279500 --driver=docker  --container-runtime=docker: (31.278384385s)
--- PASS: TestErrorSpam/setup (31.28s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.3s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 pause
--- PASS: TestErrorSpam/pause (1.30s)

TestErrorSpam/unpause (1.38s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 unpause
--- PASS: TestErrorSpam/unpause (1.38s)

TestErrorSpam/stop (11.04s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 stop: (10.848266387s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279500 --log_dir /tmp/nospam-279500 stop
--- PASS: TestErrorSpam/stop (11.04s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/test/nested/copy/2795233/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (87.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644345 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-644345 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m27.279705216s)
--- PASS: TestFunctional/serial/StartWithProxy (87.28s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.47s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644345 --alsologtostderr -v=8
E0805 11:55:25.195218 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:25.202109 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:25.212517 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:25.232992 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:25.273257 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:25.353644 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:25.514139 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:25.834679 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:26.474900 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:27.755143 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:30.315789 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:35.436007 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 11:55:45.676854 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-644345 --alsologtostderr -v=8: (35.460844536s)
functional_test.go:659: soft start took 35.466315276s for "functional-644345" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.47s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-644345 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 cache add registry.k8s.io/pause:3.1: (1.080170541s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 cache add registry.k8s.io/pause:3.3: (1.204356775s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 cache add registry.k8s.io/pause:latest: (1.031213355s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-644345 /tmp/TestFunctionalserialCacheCmdcacheadd_local1194279572/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cache add minikube-local-cache-test:functional-644345
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cache delete minikube-local-cache-test:functional-644345
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-644345
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.04626ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
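
The exit status 1 in the middle of this test is expected: the image is removed, `crictl inspecti` must fail, and after `cache reload` it must succeed again. Asserting an expected failure in Go looks roughly like this (expectFailure is a hypothetical helper):

package integration

import (
    "os/exec"
    "testing"
)

// expectFailure runs a command that should exit non-zero and fails the test
// if it succeeds or cannot be started at all.
func expectFailure(t *testing.T, name string, args ...string) {
    t.Helper()
    out, err := exec.Command(name, args...).CombinedOutput()
    if err == nil {
        t.Fatalf("%s %v unexpectedly succeeded:\n%s", name, args, out)
    }
    if _, ok := err.(*exec.ExitError); !ok {
        t.Fatalf("%s %v did not run: %v", name, args, err)
    }
}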

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 kubectl -- --context functional-644345 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-644345 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (44.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0805 11:56:06.157712 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-644345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.595967138s)
functional_test.go:757: restart took 44.596085339s for "functional-644345" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.60s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-644345 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
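
The phase/status pairs above come from decoding `kubectl get po -o=json` and reading each control-plane pod's phase plus its Ready condition. A minimal decoding sketch against the standard pod JSON shape (struct and function names are illustrative):

package integration

import (
    "encoding/json"
    "fmt"
)

// podList mirrors only the fields the health check reads.
type podList struct {
    Items []struct {
        Metadata struct {
            Name string `json:"name"`
        } `json:"metadata"`
        Status struct {
            Phase      string `json:"phase"`
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    } `json:"items"`
}

// reportHealth prints phase and Ready status for each pod in the JSON output.
func reportHealth(raw []byte) error {
    var pods podList
    if err := json.Unmarshal(raw, &pods); err != nil {
        return err
    }
    for _, p := range pods.Items {
        fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
            }
        }
    }
    return nil
}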

TestFunctional/serial/LogsCmd (1.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 logs: (1.270742642s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

TestFunctional/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 logs --file /tmp/TestFunctionalserialLogsFileCmd3912893320/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 logs --file /tmp/TestFunctionalserialLogsFileCmd3912893320/001/logs.txt: (1.237096272s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (5.09s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-644345 apply -f testdata/invalidsvc.yaml
E0805 11:56:47.117916 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-644345
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-644345: exit status 115 (593.322939ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32196 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-644345 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-644345 delete -f testdata/invalidsvc.yaml: (1.130212361s)
--- PASS: TestFunctional/serial/InvalidService (5.09s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 config get cpus: exit status 14 (87.644766ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 config get cpus: exit status 14 (66.769347ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
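
Exit status 14 is how `config get` reports an unset key, so the test distinguishes "key not found" from a crash by inspecting the exit code rather than treating any non-zero exit as fatal. A sketch of extracting an exit code with os/exec (exitCode is a hypothetical helper):

package integration

import (
    "errors"
    "os/exec"
)

// exitCode runs a command and returns its exit status; -1 with an error means
// the command could not be started at all.
func exitCode(name string, args ...string) (int, error) {
    err := exec.Command(name, args...).Run()
    if err == nil {
        return 0, nil
    }
    var ee *exec.ExitError
    if errors.As(err, &ee) {
        return ee.ExitCode(), nil
    }
    return -1, err
}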

TestFunctional/parallel/DashboardCmd (10.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-644345 --alsologtostderr -v=1]
2024/08/05 12:00:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-644345 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2837129: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.86s)
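
For reference, the URL polled in the DEBUG line above is what the command under test prints; a sketch, with the URL as logged in this run:

$ out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-644345
http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/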

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-644345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.008112ms)

                                                
                                                
-- stdout --
	* [functional-644345] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:00:35.639976 2836887 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:00:35.640110 2836887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:00:35.640121 2836887 out.go:304] Setting ErrFile to fd 2...
	I0805 12:00:35.640126 2836887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:00:35.640398 2836887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 12:00:35.640759 2836887 out.go:298] Setting JSON to false
	I0805 12:00:35.641861 2836887 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70987,"bootTime":1722788249,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 12:00:35.641934 2836887 start.go:139] virtualization:  
	I0805 12:00:35.644002 2836887 out.go:177] * [functional-644345] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 12:00:35.646407 2836887 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:00:35.646559 2836887 notify.go:220] Checking for updates...
	I0805 12:00:35.649928 2836887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:00:35.651438 2836887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	I0805 12:00:35.653425 2836887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	I0805 12:00:35.655009 2836887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 12:00:35.656543 2836887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:00:35.658613 2836887 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 12:00:35.659146 2836887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:00:35.681984 2836887 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 12:00:35.682116 2836887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 12:00:35.753477 2836887 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 12:00:35.742248481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 12:00:35.753652 2836887 docker.go:307] overlay module found
	I0805 12:00:35.756554 2836887 out.go:177] * Using the docker driver based on existing profile
	I0805 12:00:35.758187 2836887 start.go:297] selected driver: docker
	I0805 12:00:35.758210 2836887 start.go:901] validating driver "docker" against &{Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:00:35.758337 2836887 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:00:35.760721 2836887 out.go:177] 
	W0805 12:00:35.762252 2836887 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 12:00:35.763951 2836887 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644345 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)
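
The dry run validates the requested memory against minikube's floor without touching the cluster; condensed from the two invocations above (exit status 23 corresponds to RSRC_INSUFFICIENT_REQ_MEMORY):

$ out/minikube-linux-arm64 start -p functional-644345 --dry-run --memory 250MB --driver=docker --container-runtime=docker
X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
$ echo $?
23
$ out/minikube-linux-arm64 start -p functional-644345 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker   # no --memory override: succeeds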

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-644345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (178.919688ms)

                                                
                                                
-- stdout --
	* [functional-644345] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:00:35.467463 2836843 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:00:35.467645 2836843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:00:35.467676 2836843 out.go:304] Setting ErrFile to fd 2...
	I0805 12:00:35.467696 2836843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:00:35.468726 2836843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 12:00:35.469188 2836843 out.go:298] Setting JSON to false
	I0805 12:00:35.470243 2836843 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70987,"bootTime":1722788249,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 12:00:35.470349 2836843 start.go:139] virtualization:  
	I0805 12:00:35.473117 2836843 out.go:177] * [functional-644345] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0805 12:00:35.474988 2836843 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 12:00:35.475119 2836843 notify.go:220] Checking for updates...
	I0805 12:00:35.478894 2836843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 12:00:35.480466 2836843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	I0805 12:00:35.482214 2836843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	I0805 12:00:35.484039 2836843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 12:00:35.485682 2836843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 12:00:35.488064 2836843 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 12:00:35.488685 2836843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 12:00:35.523717 2836843 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 12:00:35.523830 2836843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 12:00:35.582419 2836843 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 12:00:35.572684291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 12:00:35.582529 2836843 docker.go:307] overlay module found
	I0805 12:00:35.584825 2836843 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0805 12:00:35.586518 2836843 start.go:297] selected driver: docker
	I0805 12:00:35.586536 2836843 start.go:901] validating driver "docker" against &{Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 12:00:35.586633 2836843 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 12:00:35.589038 2836843 out.go:177] 
	W0805 12:00:35.590948 2836843 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 12:00:35.592690 2836843 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
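
The French output above is locale-driven rather than flag-driven; a sketch, assuming the harness selects the translation through a locale variable such as LC_ALL (the exact variable it sets is not visible in this log):

$ LC_ALL=fr out/minikube-linux-arm64 start -p functional-644345 --dry-run --memory 250MB --driver=docker --container-runtime=docker
* [functional-644345] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
...
X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo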

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
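
The second invocation above feeds a Go template to -f; the fields come from minikube's status struct (Host, Kubelet, APIServer, Kubeconfig), and the "kublet" label is just literal template text from the test, not a field name. Illustrative output on a healthy cluster (assumed, not captured in this log):

$ out/minikube-linux-arm64 -p functional-644345 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured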

TestFunctional/parallel/ServiceCmdConnect (11.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-644345 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-644345 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-7td66" [9e595247-5ca4-4375-90e6-d53c5b064cb4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-7td66" [9e595247-5ca4-4375-90e6-d53c5b064cb4] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003751655s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32568
functional_test.go:1671: http://192.168.49.2:32568: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-7td66

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32568
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.65s)
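
The flow above (deploy, expose as a NodePort, resolve the node URL, fetch it) condenses to the following, with curl standing in for the test's HTTP client and the port taken from this run:

$ kubectl --context functional-644345 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
$ kubectl --context functional-644345 expose deployment hello-node-connect --type=NodePort --port=8080
$ out/minikube-linux-arm64 -p functional-644345 service hello-node-connect --url
http://192.168.49.2:32568
$ curl -s http://192.168.49.2:32568/    # echoserver reflects the request back, as in the body above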

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh -n functional-644345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cp functional-644345:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3952485624/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh -n functional-644345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh -n functional-644345 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.39s)
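
Both copy directions exercised above, plus the verification step, in shell form (the host-side destination path here is illustrative):

$ out/minikube-linux-arm64 -p functional-644345 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
$ out/minikube-linux-arm64 -p functional-644345 cp functional-644345:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
$ out/minikube-linux-arm64 -p functional-644345 ssh -n functional-644345 "sudo cat /home/docker/cp-test.txt"  # confirm the file landed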

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2795233/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo cat /etc/test/nested/copy/2795233/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2795233.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo cat /etc/ssl/certs/2795233.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2795233.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo cat /usr/share/ca-certificates/2795233.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/27952332.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo cat /etc/ssl/certs/27952332.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/27952332.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo cat /usr/share/ca-certificates/27952332.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)
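
The paired checks suggest each synced certificate is installed both under its own name (2795233.pem) and under an OpenSSL subject-hash name (51391683.0, 3ec20f2e.0), the naming scheme OpenSSL uses to look up CA certificates in /etc/ssl/certs. Assuming a standard openssl build, the hash basename can be derived from the PEM:

$ openssl x509 -noout -subject_hash -in /etc/ssl/certs/2795233.pem   # prints the 8-hex-digit basename, e.g. 51391683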

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-644345 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
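
The Go template here indexes the first node in the list and ranges over its label map, printing only the keys:

$ kubectl --context functional-644345 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
# typical keys (illustrative): kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os ...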

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 ssh "sudo systemctl is-active crio": exit status 1 (266.215557ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
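
systemctl is-active exits 3 for an inactive unit, which surfaces above as the remote "Process exited with status 3" while the minikube ssh wrapper itself returns a non-zero status of its own; condensed:

$ out/minikube-linux-arm64 -p functional-644345 ssh "sudo systemctl is-active crio"
inactive
$ echo $?    # non-zero: minikube ssh propagates the remote failure
1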

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-644345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-644345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-644345 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-644345 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2832231: os: process already finished
helpers_test.go:508: unable to kill pid 2832038: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-644345 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-644345 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-644345 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-dq6cj" [04214fb4-d16e-4813-9624-eaeb02b0d2d3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-dq6cj" [04214fb4-d16e-4813-9624-eaeb02b0d2d3] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004476374s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 service list -o json
functional_test.go:1490: Took "512.221814ms" to run "out/minikube-linux-arm64 -p functional-644345 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31345
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31345
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
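
The service subcommands exercised in this block, condensed into one sketch (endpoints taken from this run):

$ out/minikube-linux-arm64 -p functional-644345 service list -o json
$ out/minikube-linux-arm64 -p functional-644345 service --namespace=default --https --url hello-node
https://192.168.49.2:31345
$ out/minikube-linux-arm64 -p functional-644345 service hello-node --url --format={{.IP}}   # Go template over the endpoint; here just the IP
$ out/minikube-linux-arm64 -p functional-644345 service hello-node --url
http://192.168.49.2:31345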

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "335.418229ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "61.003423ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "329.545334ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "63.428728ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
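
The timing gap above (roughly 330ms versus 63ms) is consistent with --light skipping the per-profile status probe and reporting from config alone; a sketch, with that reading of --light as an assumption rather than something this log confirms:

$ out/minikube-linux-arm64 profile list -o json           # probes each profile's state (slower)
$ out/minikube-linux-arm64 profile list -o json --light   # config only, hence the faster run above (assumed behavior)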

TestFunctional/parallel/MountCmd/any-port (6.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdany-port3587498908/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722859224422428082" to /tmp/TestFunctionalparallelMountCmdany-port3587498908/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722859224422428082" to /tmp/TestFunctionalparallelMountCmdany-port3587498908/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722859224422428082" to /tmp/TestFunctionalparallelMountCmdany-port3587498908/001/test-1722859224422428082
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  5 12:00 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  5 12:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  5 12:00 test-1722859224422428082
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh cat /mount-9p/test-1722859224422428082
E0805 12:00:25.194926 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-644345 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [829e5604-1803-45b6-9c4e-2c7043a695dd] Pending
helpers_test.go:344: "busybox-mount" [829e5604-1803-45b6-9c4e-2c7043a695dd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [829e5604-1803-45b6-9c4e-2c7043a695dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [829e5604-1803-45b6-9c4e-2c7043a695dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003805952s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-644345 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdany-port3587498908/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.24s)
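
The 9p mount flow above, by hand (the host directory name is illustrative; the mount command blocks, so it is backgrounded here):

$ out/minikube-linux-arm64 mount -p functional-644345 /tmp/hostdir:/mount-9p &
$ out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is live
$ out/minikube-linux-arm64 -p functional-644345 ssh -- ls -la /mount-9p                # host files visible in the guest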

TestFunctional/parallel/MountCmd/specific-port (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdspecific-port2083755802/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.981936ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdspecific-port2083755802/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 ssh "sudo umount -f /mount-9p": exit status 1 (283.818754ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-644345 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdspecific-port2083755802/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup88654712/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup88654712/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup88654712/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T" /mount1: exit status 1 (552.902485ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-644345 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup88654712/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup88654712/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup88654712/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)
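
Cleanup hinges on the --kill flag seen above, which this test uses to tear down every mount process for the profile in one call (that reading is taken from what the test then verifies):

$ out/minikube-linux-arm64 mount -p functional-644345 --kill=true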

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.93s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)
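
The two version forms, for reference (version string as reported elsewhere in this run; the per-component detail is assumed, not captured here):

$ out/minikube-linux-arm64 -p functional-644345 version --short
v1.33.1
$ out/minikube-linux-arm64 -p functional-644345 version -o=json --components   # JSON including component versions as well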

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644345 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-644345
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-644345
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644345 image ls --format short --alsologtostderr:
I0805 12:00:57.145903 2838954 out.go:291] Setting OutFile to fd 1 ...
I0805 12:00:57.146130 2838954 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:57.146152 2838954 out.go:304] Setting ErrFile to fd 2...
I0805 12:00:57.146177 2838954 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:57.146446 2838954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
I0805 12:00:57.147154 2838954 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:57.147341 2838954 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:57.147899 2838954 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 12:00:57.176317 2838954 ssh_runner.go:195] Run: systemctl --version
I0805 12:00:57.176384 2838954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 12:00:57.194322 2838954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 12:00:57.288995 2838954 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644345 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-644345 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/localhost/my-image                | functional-644345 | 7edf3380f4647 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-644345 | d2853d217a368 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644345 image ls --format table --alsologtostderr:
I0805 12:01:00.488109 2839286 out.go:291] Setting OutFile to fd 1 ...
I0805 12:01:00.488323 2839286 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:01:00.488331 2839286 out.go:304] Setting ErrFile to fd 2...
I0805 12:01:00.488359 2839286 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:01:00.488664 2839286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
I0805 12:01:00.489406 2839286 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:01:00.489573 2839286 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:01:00.490216 2839286 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 12:01:00.511195 2839286 ssh_runner.go:195] Run: systemctl --version
I0805 12:01:00.511270 2839286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 12:01:00.539200 2839286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 12:01:00.645556 2839286 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
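
Note: the stderr trace above shows where the table comes from: minikube opens an SSH session into the functional-644345 node and lists the node's own Docker images. A minimal way to inspect that raw data directly (a sketch, assuming the profile is still running):

    out/minikube-linux-arm64 -p functional-644345 ssh -- docker images --no-trunc --format "{{json .}}"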

TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644345 image ls --format json --alsologtostderr:
[{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"7edf3380f46476a5083af157a4da0fac402058a5fe63e33d9f661e1b7d41ae02","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-644345"],"size":"1410000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests"
:[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/ec
hoserver-arm:1.8"],"size":"85000000"},{"id":"d2853d217a3682e508a53a0a3cd3400c128b59d376e2b2a731dadb230367cc12","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-644345"],"size":"30"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-644345"],"size":"4780000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644345 image ls --format json --alsologtostderr:
I0805 12:00:59.940978 2839254 out.go:291] Setting OutFile to fd 1 ...
I0805 12:00:59.941180 2839254 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:59.941210 2839254 out.go:304] Setting ErrFile to fd 2...
I0805 12:00:59.941232 2839254 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:59.941527 2839254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
I0805 12:00:59.942223 2839254 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:59.942496 2839254 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:59.943032 2839254 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 12:00:59.962005 2839254 ssh_runner.go:195] Run: systemctl --version
I0805 12:00:59.962069 2839254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 12:00:59.981780 2839254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 12:01:00.129818 2839254 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.48s)
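
Note: the JSON form is the easiest to post-process. For example, to list only the repo tags from the output above (assumes jq is installed on the host; the field names match the stdout shown):

    out/minikube-linux-arm64 -p functional-644345 image ls --format json | jq -r '.[].repoTags[]'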

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644345 image ls --format yaml --alsologtostderr:
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: d2853d217a3682e508a53a0a3cd3400c128b59d376e2b2a731dadb230367cc12
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-644345
size: "30"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-644345
size: "4780000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644345 image ls --format yaml --alsologtostderr:
I0805 12:00:57.376457 2838987 out.go:291] Setting OutFile to fd 1 ...
I0805 12:00:57.376641 2838987 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:57.376671 2838987 out.go:304] Setting ErrFile to fd 2...
I0805 12:00:57.376696 2838987 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:57.376958 2838987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
I0805 12:00:57.377611 2838987 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:57.377790 2838987 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:57.378318 2838987 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 12:00:57.396936 2838987 ssh_runner.go:195] Run: systemctl --version
I0805 12:00:57.396996 2838987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 12:00:57.414952 2838987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 12:00:57.513186 2838987 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644345 ssh pgrep buildkitd: exit status 1 (278.74238ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image build -t localhost/my-image:functional-644345 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 image build -t localhost/my-image:functional-644345 testdata/build --alsologtostderr: (1.856705965s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644345 image build -t localhost/my-image:functional-644345 testdata/build --alsologtostderr:
I0805 12:00:57.871177 2839077 out.go:291] Setting OutFile to fd 1 ...
I0805 12:00:57.872060 2839077 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:57.872074 2839077 out.go:304] Setting ErrFile to fd 2...
I0805 12:00:57.872079 2839077 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 12:00:57.872391 2839077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
I0805 12:00:57.873081 2839077 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:57.875367 2839077 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 12:00:57.876027 2839077 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 12:00:57.893126 2839077 ssh_runner.go:195] Run: systemctl --version
I0805 12:00:57.893227 2839077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 12:00:57.910636 2839077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 12:00:58.015432 2839077 build_images.go:161] Building image from path: /tmp/build.451359692.tar
I0805 12:00:58.015516 2839077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0805 12:00:58.025656 2839077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.451359692.tar
I0805 12:00:58.029626 2839077 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.451359692.tar: stat -c "%s %y" /var/lib/minikube/build/build.451359692.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.451359692.tar': No such file or directory
I0805 12:00:58.029671 2839077 ssh_runner.go:362] scp /tmp/build.451359692.tar --> /var/lib/minikube/build/build.451359692.tar (3072 bytes)
I0805 12:00:58.059920 2839077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.451359692
I0805 12:00:58.069899 2839077 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.451359692 -xf /var/lib/minikube/build/build.451359692.tar
I0805 12:00:58.080143 2839077 docker.go:360] Building image: /var/lib/minikube/build/build.451359692
I0805 12:00:58.080311 2839077 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-644345 /var/lib/minikube/build/build.451359692
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.1s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:7edf3380f46476a5083af157a4da0fac402058a5fe63e33d9f661e1b7d41ae02 done
#8 naming to localhost/my-image:functional-644345 done
#8 DONE 0.1s
I0805 12:00:59.649257 2839077 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-644345 /var/lib/minikube/build/build.451359692: (1.568920677s)
I0805 12:00:59.649331 2839077 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.451359692
I0805 12:00:59.658765 2839077 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.451359692.tar
I0805 12:00:59.670986 2839077 build_images.go:217] Built localhost/my-image:functional-644345 from /tmp/build.451359692.tar
I0805 12:00:59.671016 2839077 build_images.go:133] succeeded building to: functional-644345
I0805 12:00:59.671021 2839077 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.35s)
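
Note: the BuildKit steps #1-#8 above imply a three-instruction build context. A minimal reconstruction that should reproduce the same build (the Dockerfile and content.txt below are inferred from the log, not the actual testdata/build fixture):

    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    printf 'test content\n' > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-arm64 -p functional-644345 image build -t localhost/my-image:functional-644345 .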

TestFunctional/parallel/ImageCommands/Setup (0.77s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-644345
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image load --daemon docker.io/kicbase/echo-server:functional-644345 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image load --daemon docker.io/kicbase/echo-server:functional-644345 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-644345
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image load --daemon docker.io/kicbase/echo-server:functional-644345 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image save docker.io/kicbase/echo-server:functional-644345 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image rm docker.io/kicbase/echo-server:functional-644345 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-644345
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 image save --daemon docker.io/kicbase/echo-server:functional-644345 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-644345
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
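
Note: taken together, the last four tests exercise a full save/remove/load round trip. The same sequence by hand, using only commands shown above (the tar path is illustrative):

    out/minikube-linux-arm64 -p functional-644345 image save docker.io/kicbase/echo-server:functional-644345 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-644345 image rm docker.io/kicbase/echo-server:functional-644345
    out/minikube-linux-arm64 -p functional-644345 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-644345 image ls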

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-644345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
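
Note: all three subtests run the same command and differ only in the kubeconfig state they start from (context already correct, no minikube cluster entry, no clusters at all). update-context rewrites the profile's kubeconfig entry when the cluster's IP or port has changed:

    out/minikube-linux-arm64 -p functional-644345 update-context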

TestFunctional/parallel/DockerEnv/bash (1.04s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-644345 docker-env) && out/minikube-linux-arm64 status -p functional-644345"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-644345 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.04s)
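
Note: docker-env emits shell export statements that point the host's docker CLI at the daemon inside the minikube node, which is exactly what the test evaluates:

    eval $(out/minikube-linux-arm64 -p functional-644345 docker-env)
    docker images   # now lists the cluster node's images, not the host's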

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-644345 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-644345
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-644345
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-644345
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (134.36s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-799243 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-799243 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m13.369657284s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (134.36s)
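
Note: the --ha flag provisions extra control-plane nodes; the status output later in this report lists ha-799243, ha-799243-m02 and ha-799243-m03 as Control Plane and ha-799243-m04 as Worker. The invocation, trimmed of test-only verbosity flags:

    out/minikube-linux-arm64 start -p ha-799243 --ha --driver=docker --container-runtime=docker --memory=2200 --wait=true
    out/minikube-linux-arm64 -p ha-799243 status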

TestMultiControlPlane/serial/DeployApp (83.16s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-799243 -- rollout status deployment/busybox: (3.401631004s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0805 12:05:25.194527 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-h4g5k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-sgw7d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-wrkjh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-h4g5k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-sgw7d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-wrkjh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-h4g5k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-sgw7d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-wrkjh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (83.16s)
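
Note: the repeated "expected 3 Pod IPs but got 2 (may be temporary)" lines are the test polling until every busybox replica has been scheduled and assigned a pod IP. A sketch of the same wait loop in shell:

    until [ "$(kubectl --context ha-799243 get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 3 ]; do
      sleep 5
    done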

TestMultiControlPlane/serial/PingHostFromPods (1.76s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-h4g5k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-h4g5k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-sgw7d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-sgw7d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-wrkjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-799243 -- exec busybox-fc5497c4f-wrkjh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.76s)
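
Note: with busybox's nslookup output, line 5 (awk 'NR==5') is the Address line of the answer section and its third space-separated field is the IP, so the pipeline extracts the host address (192.168.49.1 here), which the follow-up ping then targets:

    kubectl --context ha-799243 exec busybox-fc5497c4f-h4g5k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-799243 exec busybox-fc5497c4f-h4g5k -- sh -c "ping -c 1 192.168.49.1"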

TestMultiControlPlane/serial/AddWorkerNode (27.89s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-799243 -v=7 --alsologtostderr
E0805 12:06:51.860713 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:51.866595 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:51.877113 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:51.897373 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:51.937617 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:52.017866 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:52.178994 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:52.499916 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:53.140762 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:54.421920 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:06:56.982815 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-799243 -v=7 --alsologtostderr: (26.841190518s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr: (1.049299471s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.89s)
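
Note: node add attaches another node to the existing ha-799243 cluster; without further flags it joins as a worker, which is why the status output below shows ha-799243-m04 with type Worker:

    out/minikube-linux-arm64 node add -p ha-799243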

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-799243 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)
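
Note: profile list --output json returns machine-readable profile records, which is what this check parses. To pull just the profile names (assumes jq; the valid/Name field names reflect recent minikube releases and may differ in older ones):

    out/minikube-linux-arm64 profile list --output json | jq -r '.valid[].Name'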

TestMultiControlPlane/serial/CopyFile (20.07s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-799243 status --output json -v=7 --alsologtostderr: (1.279268816s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp testdata/cp-test.txt ha-799243:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3829663320/001/cp-test_ha-799243.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test.txt"
E0805 12:07:02.103841 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243:/home/docker/cp-test.txt ha-799243-m02:/home/docker/cp-test_ha-799243_ha-799243-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test_ha-799243_ha-799243-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243:/home/docker/cp-test.txt ha-799243-m03:/home/docker/cp-test_ha-799243_ha-799243-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test_ha-799243_ha-799243-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243:/home/docker/cp-test.txt ha-799243-m04:/home/docker/cp-test_ha-799243_ha-799243-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test_ha-799243_ha-799243-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp testdata/cp-test.txt ha-799243-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3829663320/001/cp-test_ha-799243-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m02:/home/docker/cp-test.txt ha-799243:/home/docker/cp-test_ha-799243-m02_ha-799243.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test_ha-799243-m02_ha-799243.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m02:/home/docker/cp-test.txt ha-799243-m03:/home/docker/cp-test_ha-799243-m02_ha-799243-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test_ha-799243-m02_ha-799243-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m02:/home/docker/cp-test.txt ha-799243-m04:/home/docker/cp-test_ha-799243-m02_ha-799243-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test_ha-799243-m02_ha-799243-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp testdata/cp-test.txt ha-799243-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3829663320/001/cp-test_ha-799243-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m03:/home/docker/cp-test.txt ha-799243:/home/docker/cp-test_ha-799243-m03_ha-799243.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test_ha-799243-m03_ha-799243.txt"
E0805 12:07:12.344076 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m03:/home/docker/cp-test.txt ha-799243-m02:/home/docker/cp-test_ha-799243-m03_ha-799243-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test_ha-799243-m03_ha-799243-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m03:/home/docker/cp-test.txt ha-799243-m04:/home/docker/cp-test_ha-799243-m03_ha-799243-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test_ha-799243-m03_ha-799243-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp testdata/cp-test.txt ha-799243-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3829663320/001/cp-test_ha-799243-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m04:/home/docker/cp-test.txt ha-799243:/home/docker/cp-test_ha-799243-m04_ha-799243.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243 "sudo cat /home/docker/cp-test_ha-799243-m04_ha-799243.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m04:/home/docker/cp-test.txt ha-799243-m02:/home/docker/cp-test_ha-799243-m04_ha-799243-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m02 "sudo cat /home/docker/cp-test_ha-799243-m04_ha-799243-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 cp ha-799243-m04:/home/docker/cp-test.txt ha-799243-m03:/home/docker/cp-test_ha-799243-m04_ha-799243-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 ssh -n ha-799243-m03 "sudo cat /home/docker/cp-test_ha-799243-m04_ha-799243-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.07s)
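
Note: the cp matrix above reduces to three shapes of the same command: host-to-node, node-to-host, and node-to-node (each copy is verified afterwards with ssh + sudo cat):

    out/minikube-linux-arm64 -p ha-799243 cp testdata/cp-test.txt ha-799243:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-799243 cp ha-799243:/home/docker/cp-test.txt /tmp/cp-test_ha-799243.txt
    out/minikube-linux-arm64 -p ha-799243 cp ha-799243:/home/docker/cp-test.txt ha-799243-m02:/home/docker/cp-test_ha-799243.txt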

TestMultiControlPlane/serial/StopSecondaryNode (11.81s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-799243 node stop m02 -v=7 --alsologtostderr: (11.060415534s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr: exit status 7 (752.515191ms)
-- stdout --
	ha-799243
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-799243-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-799243-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-799243-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0805 12:07:30.854664 2863154 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:07:30.854836 2863154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:07:30.854845 2863154 out.go:304] Setting ErrFile to fd 2...
	I0805 12:07:30.854850 2863154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:07:30.855147 2863154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 12:07:30.855337 2863154 out.go:298] Setting JSON to false
	I0805 12:07:30.855375 2863154 mustload.go:65] Loading cluster: ha-799243
	I0805 12:07:30.855432 2863154 notify.go:220] Checking for updates...
	I0805 12:07:30.855808 2863154 config.go:182] Loaded profile config "ha-799243": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 12:07:30.855827 2863154 status.go:255] checking status of ha-799243 ...
	I0805 12:07:30.856448 2863154 cli_runner.go:164] Run: docker container inspect ha-799243 --format={{.State.Status}}
	I0805 12:07:30.875814 2863154 status.go:330] ha-799243 host status = "Running" (err=<nil>)
	I0805 12:07:30.875855 2863154 host.go:66] Checking if "ha-799243" exists ...
	I0805 12:07:30.876320 2863154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-799243
	I0805 12:07:30.895315 2863154 host.go:66] Checking if "ha-799243" exists ...
	I0805 12:07:30.895802 2863154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:07:30.895913 2863154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-799243
	I0805 12:07:30.923489 2863154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36448 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/ha-799243/id_rsa Username:docker}
	I0805 12:07:31.022593 2863154 ssh_runner.go:195] Run: systemctl --version
	I0805 12:07:31.027326 2863154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:07:31.041339 2863154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 12:07:31.107082 2863154 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-05 12:07:31.094810045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 12:07:31.107735 2863154 kubeconfig.go:125] found "ha-799243" server: "https://192.168.49.254:8443"
	I0805 12:07:31.107773 2863154 api_server.go:166] Checking apiserver status ...
	I0805 12:07:31.107829 2863154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:07:31.122068 2863154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2403/cgroup
	I0805 12:07:31.134171 2863154 api_server.go:182] apiserver freezer: "11:freezer:/docker/1b6d0c84aa6c5b3e9f0b5eab445a0ee3045ca1753816a96ff1dda7f709bb71a7/kubepods/burstable/pod02ee4ea5c12d81fb1b9d296c23a79de2/af74aad5c01e09e85424f0b3161ead2129a6d8942d3704a21b4dbabbd6742fcf"
	I0805 12:07:31.134254 2863154 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1b6d0c84aa6c5b3e9f0b5eab445a0ee3045ca1753816a96ff1dda7f709bb71a7/kubepods/burstable/pod02ee4ea5c12d81fb1b9d296c23a79de2/af74aad5c01e09e85424f0b3161ead2129a6d8942d3704a21b4dbabbd6742fcf/freezer.state
	I0805 12:07:31.144092 2863154 api_server.go:204] freezer state: "THAWED"
	I0805 12:07:31.144121 2863154 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0805 12:07:31.152462 2863154 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0805 12:07:31.152491 2863154 status.go:422] ha-799243 apiserver status = Running (err=<nil>)
	I0805 12:07:31.152503 2863154 status.go:257] ha-799243 status: &{Name:ha-799243 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:07:31.152533 2863154 status.go:255] checking status of ha-799243-m02 ...
	I0805 12:07:31.152855 2863154 cli_runner.go:164] Run: docker container inspect ha-799243-m02 --format={{.State.Status}}
	I0805 12:07:31.173743 2863154 status.go:330] ha-799243-m02 host status = "Stopped" (err=<nil>)
	I0805 12:07:31.173767 2863154 status.go:343] host is not running, skipping remaining checks
	I0805 12:07:31.173775 2863154 status.go:257] ha-799243-m02 status: &{Name:ha-799243-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:07:31.173796 2863154 status.go:255] checking status of ha-799243-m03 ...
	I0805 12:07:31.174117 2863154 cli_runner.go:164] Run: docker container inspect ha-799243-m03 --format={{.State.Status}}
	I0805 12:07:31.190678 2863154 status.go:330] ha-799243-m03 host status = "Running" (err=<nil>)
	I0805 12:07:31.190703 2863154 host.go:66] Checking if "ha-799243-m03" exists ...
	I0805 12:07:31.191023 2863154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-799243-m03
	I0805 12:07:31.208380 2863154 host.go:66] Checking if "ha-799243-m03" exists ...
	I0805 12:07:31.208747 2863154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:07:31.208803 2863154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-799243-m03
	I0805 12:07:31.227411 2863154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36458 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/ha-799243-m03/id_rsa Username:docker}
	I0805 12:07:31.321724 2863154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:07:31.334348 2863154 kubeconfig.go:125] found "ha-799243" server: "https://192.168.49.254:8443"
	I0805 12:07:31.334386 2863154 api_server.go:166] Checking apiserver status ...
	I0805 12:07:31.334428 2863154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:07:31.347616 2863154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2261/cgroup
	I0805 12:07:31.358268 2863154 api_server.go:182] apiserver freezer: "11:freezer:/docker/27472692341a26af0813940b4b3bfccd588dfa664e01bcb0927b2c31a76e33f5/kubepods/burstable/pod3cc461f3546a6b05ba6790f5b6776f9e/e12aba35514bcc74a937a5722f8e95f58c0b3c455ac4046098fa2a0216ef7bcf"
	I0805 12:07:31.358343 2863154 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27472692341a26af0813940b4b3bfccd588dfa664e01bcb0927b2c31a76e33f5/kubepods/burstable/pod3cc461f3546a6b05ba6790f5b6776f9e/e12aba35514bcc74a937a5722f8e95f58c0b3c455ac4046098fa2a0216ef7bcf/freezer.state
	I0805 12:07:31.368290 2863154 api_server.go:204] freezer state: "THAWED"
	I0805 12:07:31.368321 2863154 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0805 12:07:31.376062 2863154 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0805 12:07:31.376112 2863154 status.go:422] ha-799243-m03 apiserver status = Running (err=<nil>)
	I0805 12:07:31.376127 2863154 status.go:257] ha-799243-m03 status: &{Name:ha-799243-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:07:31.376153 2863154 status.go:255] checking status of ha-799243-m04 ...
	I0805 12:07:31.376548 2863154 cli_runner.go:164] Run: docker container inspect ha-799243-m04 --format={{.State.Status}}
	I0805 12:07:31.394079 2863154 status.go:330] ha-799243-m04 host status = "Running" (err=<nil>)
	I0805 12:07:31.394110 2863154 host.go:66] Checking if "ha-799243-m04" exists ...
	I0805 12:07:31.394504 2863154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-799243-m04
	I0805 12:07:31.412328 2863154 host.go:66] Checking if "ha-799243-m04" exists ...
	I0805 12:07:31.412633 2863154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:07:31.412682 2863154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-799243-m04
	I0805 12:07:31.431346 2863154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36463 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/ha-799243-m04/id_rsa Username:docker}
	I0805 12:07:31.525587 2863154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:07:31.537218 2863154 status.go:257] ha-799243-m04 status: &{Name:ha-799243-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.81s)
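The stderr block above shows how minikube decides that an apiserver is Running: it finds the newest kube-apiserver process, confirms its freezer cgroup is THAWED (i.e. not paused), and then probes /healthz. A sketch of the same check done by hand against this cluster; the PID and cgroup path are the ones from this run and will differ elsewhere, and anonymous access to /healthz is assumed:

    # locate the apiserver process inside the node
    $ minikube -p ha-799243 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    2403
    # a THAWED freezer state means the process is not paused
    $ minikube -p ha-799243 ssh -- sudo cat /sys/fs/cgroup/freezer/docker/<container-id>/kubepods/burstable/<pod-uid>/<container>/freezer.state
    THAWED
    # finally probe the load-balanced apiserver endpoint
    $ curl -k https://192.168.49.254:8443/healthz
    ok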

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (31.57s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 node start m02 -v=7 --alsologtostderr
E0805 12:07:32.825199 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-799243 node start m02 -v=7 --alsologtostderr: (30.498365442s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.49s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0805 12:08:13.786481 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (16.487033265s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.49s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (169.64s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-799243 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-799243 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-799243 -v=7 --alsologtostderr: (34.354446909s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-799243 --wait=true -v=7 --alsologtostderr
E0805 12:09:35.707015 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:10:25.194140 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-799243 --wait=true -v=7 --alsologtostderr: (2m15.090603522s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-799243
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (169.64s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.3s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-799243 node delete m03 -v=7 --alsologtostderr: (11.297759558s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.30s)
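The go-template in the final assertion above flattens each node's conditions down to just the Ready status, so a healthy cluster prints one True per remaining node. Stripped of the test harness quoting, the check is:

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    True
    True
    True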

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMultiControlPlane/serial/StopCluster (32.8s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 stop -v=7 --alsologtostderr
E0805 12:11:48.239724 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 12:11:51.859858 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-799243 stop -v=7 --alsologtostderr: (32.692158971s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr: exit status 7 (112.29215ms)
-- stdout --
	ha-799243
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-799243-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-799243-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0805 12:11:55.397501 2889418 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:11:55.397628 2889418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:11:55.397639 2889418 out.go:304] Setting ErrFile to fd 2...
	I0805 12:11:55.397645 2889418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:11:55.397866 2889418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 12:11:55.398064 2889418 out.go:298] Setting JSON to false
	I0805 12:11:55.398094 2889418 mustload.go:65] Loading cluster: ha-799243
	I0805 12:11:55.398214 2889418 notify.go:220] Checking for updates...
	I0805 12:11:55.398507 2889418 config.go:182] Loaded profile config "ha-799243": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 12:11:55.398518 2889418 status.go:255] checking status of ha-799243 ...
	I0805 12:11:55.398967 2889418 cli_runner.go:164] Run: docker container inspect ha-799243 --format={{.State.Status}}
	I0805 12:11:55.417125 2889418 status.go:330] ha-799243 host status = "Stopped" (err=<nil>)
	I0805 12:11:55.417151 2889418 status.go:343] host is not running, skipping remaining checks
	I0805 12:11:55.417159 2889418 status.go:257] ha-799243 status: &{Name:ha-799243 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:11:55.417187 2889418 status.go:255] checking status of ha-799243-m02 ...
	I0805 12:11:55.417581 2889418 cli_runner.go:164] Run: docker container inspect ha-799243-m02 --format={{.State.Status}}
	I0805 12:11:55.435321 2889418 status.go:330] ha-799243-m02 host status = "Stopped" (err=<nil>)
	I0805 12:11:55.435343 2889418 status.go:343] host is not running, skipping remaining checks
	I0805 12:11:55.435352 2889418 status.go:257] ha-799243-m02 status: &{Name:ha-799243-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:11:55.435374 2889418 status.go:255] checking status of ha-799243-m04 ...
	I0805 12:11:55.435716 2889418 cli_runner.go:164] Run: docker container inspect ha-799243-m04 --format={{.State.Status}}
	I0805 12:11:55.461007 2889418 status.go:330] ha-799243-m04 host status = "Stopped" (err=<nil>)
	I0805 12:11:55.461026 2889418 status.go:343] host is not running, skipping remaining checks
	I0805 12:11:55.461034 2889418 status.go:257] ha-799243-m04 status: &{Name:ha-799243-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.80s)
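Note that `minikube status` reports a fully stopped cluster through its exit code (7 on this run) rather than by printing an error, which is what the Non-zero exit line above captures. A minimal scripting sketch around that behavior:

    if out/minikube-linux-arm64 -p ha-799243 status >/dev/null; then
        echo "cluster is up"
    else
        echo "cluster not fully running (exit $?)"   # 7 in this run
    fi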

TestMultiControlPlane/serial/RestartCluster (105.32s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-799243 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0805 12:12:19.547243 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-799243 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m44.332492982s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (105.32s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (42.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-799243 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-799243 --control-plane -v=7 --alsologtostderr: (41.805395743s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-799243 status -v=7 --alsologtostderr: (1.066522138s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestImageBuild/serial/Setup (32.46s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-621210 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-621210 --driver=docker  --container-runtime=docker: (32.457784034s)
--- PASS: TestImageBuild/serial/Setup (32.46s)

TestImageBuild/serial/NormalBuild (1.83s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-621210
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-621210: (1.825147286s)
--- PASS: TestImageBuild/serial/NormalBuild (1.83s)

TestImageBuild/serial/BuildWithBuildArg (0.89s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-621210
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.89s)
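`--build-opt=build-arg=...` is forwarded to the Docker build running inside the node. The report does not include ./testdata/image-build/test-arg, but a hypothetical Dockerfile that the flag would exercise looks like this:

    # Dockerfile (illustrative only; not the actual testdata file)
    FROM busybox
    ARG ENV_A
    RUN echo "ENV_A is ${ENV_A}"

built with:

    $ out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p image-621210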

TestImageBuild/serial/BuildWithDockerIgnore (0.7s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-621210
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.70s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.71s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-621210
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.71s)

TestJSONOutput/start/Command (56.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-693935 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0805 12:15:25.194836 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-693935 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (56.79511332s)
--- PASS: TestJSONOutput/start/Command (56.80s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-693935 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-693935 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-693935 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-693935 --output=json --user=testUser: (5.812432792s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-979823 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-979823 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.528562ms)
-- stdout --
	{"specversion":"1.0","id":"1295560e-edce-450e-a75b-94d9d0fa72d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-979823] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e5412f5-71a8-4e29-b027-8753fdffe1b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19377"}}
	{"specversion":"1.0","id":"f450714c-a41b-4881-bf24-15b5a526cb18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d29c12ef-3f8c-4fc8-9915-2d01855f632a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig"}}
	{"specversion":"1.0","id":"e23fc72a-26d1-4dc2-a7d5-67635c3fda42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube"}}
	{"specversion":"1.0","id":"0b907e76-085e-4519-bfb2-f5803929250a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c5a36512-64d9-4540-9bcb-4a618d5e85e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ee1a93bc-b645-4276-8294-8f13653ca2b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-979823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-979823
--- PASS: TestErrorJSONOutput (0.22s)
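Every line in the stdout block above is a CloudEvents JSON object, so the stream is straightforward to post-process. A sketch with jq (assuming jq is available) that extracts only the error event:

    $ out/minikube-linux-arm64 start -p json-output-error-979823 --output=json --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    The driver 'fail' is not supported on linux/arm64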

TestKicCustomNetwork/create_custom_network (35.64s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-994959 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-994959 --network=: (33.403961866s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-994959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-994959
E0805 12:16:51.860375 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-994959: (2.214985426s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.64s)

TestKicCustomNetwork/use_default_bridge_network (34.88s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-635472 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-635472 --network=bridge: (32.790254122s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-635472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-635472
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-635472: (2.071252493s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.88s)

TestKicExistingNetwork (34.55s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-944923 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-944923 --network=existing-network: (32.383783532s)
helpers_test.go:175: Cleaning up "existing-network-944923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-944923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-944923: (2.001634597s)
--- PASS: TestKicExistingNetwork (34.55s)
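Unlike the previous two tests, this one hands minikube a Docker network that already exists. The equivalent manual flow, assuming default `docker network create` settings are acceptable:

    $ docker network create existing-network
    $ out/minikube-linux-arm64 start -p existing-network-944923 --network=existing-network
    # the node container should now be attached to that network
    $ docker network inspect existing-network --format '{{range .Containers}}{{.Name}} {{end}}'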

TestKicCustomSubnet (35.39s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-242609 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-242609 --subnet=192.168.60.0/24: (33.272080366s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-242609 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-242609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-242609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-242609: (2.09726129s)
--- PASS: TestKicCustomSubnet (35.39s)
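The inspect template above is a handy one-liner for confirming that the cluster network really got the requested subnet; on this run it should print the value passed to --subnet:

    $ docker network inspect custom-subnet-242609 --format "{{(index .IPAM.Config 0).Subnet}}"
    192.168.60.0/24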

TestKicStaticIP (32.37s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-100848 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-100848 --static-ip=192.168.200.200: (30.126991112s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-100848 ip
helpers_test.go:175: Cleaning up "static-ip-100848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-100848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-100848: (2.079417702s)
--- PASS: TestKicStaticIP (32.37s)
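Verification here is symmetric: if --static-ip took effect, `minikube ip` echoes the requested address back:

    $ out/minikube-linux-arm64 -p static-ip-100848 ip
    192.168.200.200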

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-421031 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-421031 --driver=docker  --container-runtime=docker: (30.56387097s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-423624 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-423624 --driver=docker  --container-runtime=docker: (33.471821426s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-421031
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-423624
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-423624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-423624
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-423624: (2.102107545s)
helpers_test.go:175: Cleaning up "first-421031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-421031
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-421031: (2.096059101s)
--- PASS: TestMinikubeProfile (69.45s)

TestMountStart/serial/StartWithMountFirst (8.24s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-265492 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0805 12:20:25.194883 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-265492 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.239948464s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.24s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-265492 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-278588 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-278588 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.634529195s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.64s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-278588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.46s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-265492 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-265492 --alsologtostderr -v=5: (1.456630236s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-278588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-278588
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-278588: (1.209276893s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-278588
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-278588: (7.358296805s)
--- PASS: TestMountStart/serial/RestartStopped (8.36s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-278588 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (77.63s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-537934 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0805 12:21:51.860004 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-537934 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m16.998199831s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.63s)

TestMultiNode/serial/DeployApp2Nodes (36.99s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-537934 -- rollout status deployment/busybox: (2.906631474s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-2f248 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-l5f77 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-2f248 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-l5f77 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-2f248 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-l5f77 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.99s)
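The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are the test polling until the busybox deployment has a running pod (and hence a pod IP) on each of the two nodes. A hedged shell equivalent of that wait, using the same jsonpath the test uses (the retry count and sleep are arbitrary choices for the sketch):

    for i in $(seq 1 30); do
        ips=$(kubectl --context multinode-537934 get pods -o jsonpath='{.items[*].status.podIP}')
        [ "$(echo "$ips" | wc -w)" -eq 2 ] && break    # two IPs = one pod per node
        sleep 2
    done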

TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-2f248 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-2f248 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-l5f77 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-537934 -- exec busybox-fc5497c4f-l5f77 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
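The awk/cut pipeline above pulls the resolved address for host.minikube.internal out of busybox's nslookup output (the answer sits on line 5 in that format); the test then pings that address, which on this run is the 192.168.58.1 gateway:

    $ kubectl --context multinode-537934 exec busybox-fc5497c4f-2f248 -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    192.168.58.1
    $ kubectl --context multinode-537934 exec busybox-fc5497c4f-2f248 -- sh -c "ping -c 1 192.168.58.1"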

TestMultiNode/serial/AddNode (19.17s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-537934 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-537934 -v 3 --alsologtostderr: (18.364636628s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.17s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-537934 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (10.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp testdata/cp-test.txt multinode-537934:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2924311542/001/cp-test_multinode-537934.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934:/home/docker/cp-test.txt multinode-537934-m02:/home/docker/cp-test_multinode-537934_multinode-537934-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m02 "sudo cat /home/docker/cp-test_multinode-537934_multinode-537934-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934:/home/docker/cp-test.txt multinode-537934-m03:/home/docker/cp-test_multinode-537934_multinode-537934-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m03 "sudo cat /home/docker/cp-test_multinode-537934_multinode-537934-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp testdata/cp-test.txt multinode-537934-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2924311542/001/cp-test_multinode-537934-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934-m02:/home/docker/cp-test.txt multinode-537934:/home/docker/cp-test_multinode-537934-m02_multinode-537934.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934 "sudo cat /home/docker/cp-test_multinode-537934-m02_multinode-537934.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934-m02:/home/docker/cp-test.txt multinode-537934-m03:/home/docker/cp-test_multinode-537934-m02_multinode-537934-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m03 "sudo cat /home/docker/cp-test_multinode-537934-m02_multinode-537934-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp testdata/cp-test.txt multinode-537934-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2924311542/001/cp-test_multinode-537934-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m03 "sudo cat /home/docker/cp-test.txt"
E0805 12:23:14.907547 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934-m03:/home/docker/cp-test.txt multinode-537934:/home/docker/cp-test_multinode-537934-m03_multinode-537934.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934 "sudo cat /home/docker/cp-test_multinode-537934-m03_multinode-537934.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 cp multinode-537934-m03:/home/docker/cp-test.txt multinode-537934-m02:/home/docker/cp-test_multinode-537934-m03_multinode-537934-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 ssh -n multinode-537934-m02 "sudo cat /home/docker/cp-test_multinode-537934-m03_multinode-537934-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.47s)
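
Each CopyFile step above is the same cp-then-ssh-cat round trip. A minimal Go sketch of that check outside the harness, assuming minikube is on PATH and the multinode-537934 profile exists (the test drives out/minikube-linux-arm64 directly):

// copycheck.go: a minimal sketch of the cp round-trip, under the assumptions above.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-537934" // assumption: an existing multi-node profile
	src := "testdata/cp-test.txt"
	dst := "/home/docker/cp-test.txt"

	// Copy the file into the node, as `minikube cp` does in the log above.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, profile+":"+dst).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back over SSH and compare with the local copy.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile, "sudo cat "+dst).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(got, want) {
		log.Fatalf("content mismatch: got %q", got)
	}
	fmt.Println("cp round-trip OK")
}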

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-537934 node stop m03: (1.226400693s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-537934 status: exit status 7 (532.240718ms)

                                                
                                                
-- stdout --
	multinode-537934
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-537934-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-537934-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr: exit status 7 (523.688597ms)

                                                
                                                
-- stdout --
	multinode-537934
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-537934-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-537934-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:23:18.863653 2963928 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:23:18.863851 2963928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:23:18.863909 2963928 out.go:304] Setting ErrFile to fd 2...
	I0805 12:23:18.863930 2963928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:23:18.864190 2963928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 12:23:18.864474 2963928 out.go:298] Setting JSON to false
	I0805 12:23:18.864557 2963928 mustload.go:65] Loading cluster: multinode-537934
	I0805 12:23:18.864699 2963928 notify.go:220] Checking for updates...
	I0805 12:23:18.864993 2963928 config.go:182] Loaded profile config "multinode-537934": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 12:23:18.865006 2963928 status.go:255] checking status of multinode-537934 ...
	I0805 12:23:18.865475 2963928 cli_runner.go:164] Run: docker container inspect multinode-537934 --format={{.State.Status}}
	I0805 12:23:18.884737 2963928 status.go:330] multinode-537934 host status = "Running" (err=<nil>)
	I0805 12:23:18.884777 2963928 host.go:66] Checking if "multinode-537934" exists ...
	I0805 12:23:18.885150 2963928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-537934
	I0805 12:23:18.918150 2963928 host.go:66] Checking if "multinode-537934" exists ...
	I0805 12:23:18.918535 2963928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:23:18.918613 2963928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-537934
	I0805 12:23:18.937876 2963928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36573 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/multinode-537934/id_rsa Username:docker}
	I0805 12:23:19.033239 2963928 ssh_runner.go:195] Run: systemctl --version
	I0805 12:23:19.038342 2963928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:23:19.051866 2963928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 12:23:19.113874 2963928 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-05 12:23:19.103189959 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 12:23:19.114540 2963928 kubeconfig.go:125] found "multinode-537934" server: "https://192.168.58.2:8443"
	I0805 12:23:19.114572 2963928 api_server.go:166] Checking apiserver status ...
	I0805 12:23:19.114626 2963928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 12:23:19.127182 2963928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2236/cgroup
	I0805 12:23:19.136648 2963928 api_server.go:182] apiserver freezer: "11:freezer:/docker/029b80df5d1d65d9cbf055f0eec2c692399b6122200738d494461fbfc91ad77f/kubepods/burstable/pod4468e869465ea33ce4977260c3ee8853/a34d8cd5a907aa107bc9d5e80921fd1f223e5a09eddbf92b96e0c84d744ac25c"
	I0805 12:23:19.136728 2963928 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/029b80df5d1d65d9cbf055f0eec2c692399b6122200738d494461fbfc91ad77f/kubepods/burstable/pod4468e869465ea33ce4977260c3ee8853/a34d8cd5a907aa107bc9d5e80921fd1f223e5a09eddbf92b96e0c84d744ac25c/freezer.state
	I0805 12:23:19.145590 2963928 api_server.go:204] freezer state: "THAWED"
	I0805 12:23:19.145617 2963928 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0805 12:23:19.153733 2963928 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0805 12:23:19.153762 2963928 status.go:422] multinode-537934 apiserver status = Running (err=<nil>)
	I0805 12:23:19.153774 2963928 status.go:257] multinode-537934 status: &{Name:multinode-537934 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:23:19.153792 2963928 status.go:255] checking status of multinode-537934-m02 ...
	I0805 12:23:19.154116 2963928 cli_runner.go:164] Run: docker container inspect multinode-537934-m02 --format={{.State.Status}}
	I0805 12:23:19.171338 2963928 status.go:330] multinode-537934-m02 host status = "Running" (err=<nil>)
	I0805 12:23:19.171369 2963928 host.go:66] Checking if "multinode-537934-m02" exists ...
	I0805 12:23:19.171687 2963928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-537934-m02
	I0805 12:23:19.187729 2963928 host.go:66] Checking if "multinode-537934-m02" exists ...
	I0805 12:23:19.188068 2963928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 12:23:19.188115 2963928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-537934-m02
	I0805 12:23:19.204946 2963928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36578 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/multinode-537934-m02/id_rsa Username:docker}
	I0805 12:23:19.301206 2963928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 12:23:19.312567 2963928 status.go:257] multinode-537934-m02 status: &{Name:multinode-537934-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:23:19.312603 2963928 status.go:255] checking status of multinode-537934-m03 ...
	I0805 12:23:19.312900 2963928 cli_runner.go:164] Run: docker container inspect multinode-537934-m03 --format={{.State.Status}}
	I0805 12:23:19.329256 2963928 status.go:330] multinode-537934-m03 host status = "Stopped" (err=<nil>)
	I0805 12:23:19.329280 2963928 status.go:343] host is not running, skipping remaining checks
	I0805 12:23:19.329289 2963928 status.go:257] multinode-537934-m03 status: &{Name:multinode-537934-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
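
Note that `minikube status` deliberately exits non-zero (7) while any node is stopped, which is why the harness treats the non-zero exits above as expected. A hedged Go sketch of reading that exit code, assuming the same profile name:

// statuscode.go: a minimal sketch; exit code 7 means "a node is stopped",
// not a hard failure, per the "(may be ok)" convention in the helpers.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-537934", "status").Output()
	fmt.Print(string(out)) // status text is still printed on a non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("status exit code:", ee.ExitCode())
	}
}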

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-537934 node start m03 -v=7 --alsologtostderr: (10.763156518s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.56s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (69.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-537934
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-537934
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-537934: (22.707564499s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-537934 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-537934 --wait=true -v=8 --alsologtostderr: (46.867263139s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-537934
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.72s)
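
RestartKeepsNodes boils down to asserting that `node list` output is unchanged across a stop/start cycle. A minimal sketch of that assertion, assuming minikube on PATH, the profile name from this run, and node IPs that stay stable under the docker driver:

// nodelist.go: sketch of the keep-nodes-across-restart check.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func nodeList(profile string) string {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		log.Fatalf("node list failed: %v", err)
	}
	return string(out)
}

func run(args ...string) {
	if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	const profile = "multinode-537934" // assumption
	before := nodeList(profile)
	run("stop", "-p", profile)
	run("start", "-p", profile, "--wait=true")
	if after := nodeList(profile); after != before {
		log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
	}
	fmt.Println("restart kept all nodes")
}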

                                                
                                    
TestMultiNode/serial/DeleteNode (5.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-537934 node delete m03: (5.099420772s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.79s)
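
The go-template at multinode_test.go:444 prints each node's Ready condition status. An equivalent standalone check, as a Go sketch run against whatever kubeconfig context is current:

// ready.go: sketch of the Ready-condition probe used after the delete.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	// One "True"/"False" token per node; all must be True.
	for _, s := range strings.Fields(string(out)) {
		if s != "True" {
			log.Fatalf("node not ready: %s", s)
		}
	}
	fmt.Println("all nodes Ready")
}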

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-537934 stop: (21.399202244s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-537934 status: exit status 7 (103.053152ms)

                                                
                                                
-- stdout --
	multinode-537934
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-537934-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr: exit status 7 (105.160058ms)

                                                
                                                
-- stdout --
	multinode-537934
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-537934-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 12:25:07.955011 2976898 out.go:291] Setting OutFile to fd 1 ...
	I0805 12:25:07.955207 2976898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:25:07.955219 2976898 out.go:304] Setting ErrFile to fd 2...
	I0805 12:25:07.955225 2976898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 12:25:07.955454 2976898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
	I0805 12:25:07.955641 2976898 out.go:298] Setting JSON to false
	I0805 12:25:07.955682 2976898 mustload.go:65] Loading cluster: multinode-537934
	I0805 12:25:07.955752 2976898 notify.go:220] Checking for updates...
	I0805 12:25:07.956962 2976898 config.go:182] Loaded profile config "multinode-537934": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 12:25:07.957022 2976898 status.go:255] checking status of multinode-537934 ...
	I0805 12:25:07.957726 2976898 cli_runner.go:164] Run: docker container inspect multinode-537934 --format={{.State.Status}}
	I0805 12:25:07.975649 2976898 status.go:330] multinode-537934 host status = "Stopped" (err=<nil>)
	I0805 12:25:07.975674 2976898 status.go:343] host is not running, skipping remaining checks
	I0805 12:25:07.975681 2976898 status.go:257] multinode-537934 status: &{Name:multinode-537934 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 12:25:07.975705 2976898 status.go:255] checking status of multinode-537934-m02 ...
	I0805 12:25:07.976037 2976898 cli_runner.go:164] Run: docker container inspect multinode-537934-m02 --format={{.State.Status}}
	I0805 12:25:08.012967 2976898 status.go:330] multinode-537934-m02 host status = "Stopped" (err=<nil>)
	I0805 12:25:08.012994 2976898 status.go:343] host is not running, skipping remaining checks
	I0805 12:25:08.013002 2976898 status.go:257] multinode-537934-m02 status: &{Name:multinode-537934-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.61s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (59.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-537934 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0805 12:25:25.194140 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-537934 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.930679514s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-537934 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.62s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-537934
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-537934-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-537934-m02 --driver=docker  --container-runtime=docker: exit status 14 (90.310643ms)

                                                
                                                
-- stdout --
	* [multinode-537934-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-537934-m02' is duplicated with machine name 'multinode-537934-m02' in profile 'multinode-537934'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-537934-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-537934-m03 --driver=docker  --container-runtime=docker: (31.434179404s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-537934
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-537934: exit status 80 (323.048866ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-537934 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-537934-m03 already exists in multinode-537934-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-537934-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-537934-m03: (2.104775744s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.00s)

                                                
                                    
TestPreload (148.45s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-462585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0805 12:26:51.860390 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-462585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m41.414376772s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-462585 image pull gcr.io/k8s-minikube/busybox
E0805 12:28:28.240602 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-462585 image pull gcr.io/k8s-minikube/busybox: (1.361082823s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-462585
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-462585: (10.835946769s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-462585 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-462585 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (32.457123523s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-462585 image list
helpers_test.go:175: Cleaning up "test-preload-462585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-462585
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-462585: (2.152334542s)
--- PASS: TestPreload (148.45s)
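
The final `image list` run is what proves the point of TestPreload: the busybox image pulled before the stop must survive the restart that applies the preloaded tarball. A minimal sketch of that assertion, reusing this run's profile name as an assumption:

// preload.go: sketch of the image-survives-restart check.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "test-preload-462585", "image", "list").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		log.Fatal("busybox missing after restart: did the preload clobber pulled images?")
	}
	fmt.Println("pulled image survived the restart")
}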

                                                
                                    
TestScheduledStopUnix (106.35s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-019691 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-019691 --memory=2048 --driver=docker  --container-runtime=docker: (32.926268591s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019691 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-019691 -n scheduled-stop-019691
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019691 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019691 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-019691 -n scheduled-stop-019691
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-019691
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019691 --schedule 15s
E0805 12:30:25.194469 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-019691
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-019691: exit status 7 (72.845237ms)

                                                
                                                
-- stdout --
	scheduled-stop-019691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-019691 -n scheduled-stop-019691
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-019691 -n scheduled-stop-019691: exit status 7 (67.311548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-019691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-019691
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-019691: (1.921170791s)
--- PASS: TestScheduledStopUnix (106.35s)
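
The scheduled-stop flow above is: request `stop --schedule <duration>`, then poll `status --format={{.Host}}` until it reports Stopped (again with the exit-7-means-stopped convention). A hedged Go sketch of that poll loop, profile name assumed:

// schedstop.go: sketch of schedule-then-poll.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "scheduled-stop-019691" // assumption
	if out, err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").CombinedOutput(); err != nil {
		log.Fatalf("schedule failed: %v\n%s", err, out)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Ignore the error: exit status 7 is expected once the host is stopped.
		out, _ := exec.Command("minikube", "status", "--format", "{{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped on schedule")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for scheduled stop")
}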

                                                
                                    
TestSkaffold (120.05s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1826672498 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-352302 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-352302 --memory=2600 --driver=docker  --container-runtime=docker: (34.057551s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1826672498 run --minikube-profile skaffold-352302 --kube-context skaffold-352302 --status-check=true --port-forward=false --interactive=false
E0805 12:31:51.860703 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1826672498 run --minikube-profile skaffold-352302 --kube-context skaffold-352302 --status-check=true --port-forward=false --interactive=false: (1m10.845400878s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7f8d5f7584-8fqmf" [79ff8145-a7ac-4096-a96c-1fe4f79b6ee4] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004125613s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6d659bd9bb-rl7k6" [9b3a54bf-4dc7-4503-8540-9783fd5968af] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004448852s
helpers_test.go:175: Cleaning up "skaffold-352302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-352302
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-352302: (3.001898236s)
--- PASS: TestSkaffold (120.05s)
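
The `waiting 1m0s for pods matching "app=leeroy-app"` lines come from a label-selector poll. A minimal standalone equivalent in Go, using kubectl's jsonpath output; namespace and label are taken from the log:

// podwait.go: sketch of the pod-health poll.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "get", "pods",
			"-l", "app=leeroy-app", "-n", "default",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			fmt.Println("pod healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for app=leeroy-app")
}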

                                                
                                    
TestInsufficientStorage (11.47s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-393931 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-393931 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.168301609s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a2bc2fcf-b1cf-4a07-84a3-6e3c65a1329d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-393931] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f98e991c-9013-48ab-97c8-1e408711d38d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19377"}}
	{"specversion":"1.0","id":"87450124-7f92-45ad-a142-65eefc8cb2a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d4264199-47c4-4217-9d5a-fdb925c3f9c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig"}}
	{"specversion":"1.0","id":"4eb9bd5d-7863-409c-9b69-46a83944fc34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube"}}
	{"specversion":"1.0","id":"fd42e569-4a25-48bd-893a-204a107031c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"66e20b28-1e73-4b85-a756-03ee369d8304","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ea90f801-5ac5-4ddb-ba1a-a2a63cb3c982","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c07060bf-dbee-4c56-9732-bb36b9cdb2eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"91261a66-aa10-44fe-b6f6-8d3a55f1f799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbd58d19-5c43-4709-ab53-e67cef019ea0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bd59f1ff-505e-4454-8971-b1e075d082eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-393931\" primary control-plane node in \"insufficient-storage-393931\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"82ff15af-f82d-41a8-8fdf-daa32df49f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c4563a7-88e9-4089-a7d5-378fbeca5182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8dba049-e6bd-46ea-aaf3-bce250485c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-393931 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-393931 --output=json --layout=cluster: exit status 7 (297.726469ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-393931","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-393931","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:33:10.073832 3011407 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-393931" does not appear in /home/jenkins/minikube-integration/19377-2789855/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-393931 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-393931 --output=json --layout=cluster: exit status 7 (285.304476ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-393931","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-393931","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 12:33:10.359063 3011469 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-393931" does not appear in /home/jenkins/minikube-integration/19377-2789855/kubeconfig
	E0805 12:33:10.369317 3011469 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/insufficient-storage-393931/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-393931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-393931
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-393931: (1.714038564s)
--- PASS: TestInsufficientStorage (11.47s)
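
With `--output=json`, each stdout line above is a self-contained CloudEvent, and the failure surfaces as an `io.k8s.sigs.minikube.error` event carrying exitcode 26 plus remediation advice. A sketch of a consumer for that stream; the `demo` profile name is hypothetical:

// events.go: sketch of decoding minikube's line-delimited CloudEvents.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type event struct {
	Type string `json:"type"`
	Data struct {
		ExitCode string `json:"exitcode"` // note: a string in the emitted JSON
		Message  string `json:"message"`
		Advice   string `json:"advice"`
	} `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "demo", "--output=json") // "demo" is hypothetical
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // individual events can be long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // tolerate any non-JSON line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: exitcode=%s msg=%s\n", e.Data.ExitCode, e.Data.Message)
			break
		}
	}
	_ = cmd.Wait() // start is expected to fail in the out-of-disk scenario
}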

                                                
                                    
TestRunningBinaryUpgrade (107.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3015645161 start -p running-upgrade-193796 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0805 12:43:14.279450 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3015645161 start -p running-upgrade-193796 --memory=2200 --vm-driver=docker  --container-runtime=docker: (52.131166386s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-193796 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-193796 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.774785507s)
helpers_test.go:175: Cleaning up "running-upgrade-193796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-193796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-193796: (2.28231976s)
--- PASS: TestRunningBinaryUpgrade (107.93s)

                                                
                                    
TestKubernetesUpgrade (378.98s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0805 12:37:46.595053 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:46.601201 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:46.611469 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:46.631720 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:46.671997 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:46.752314 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:46.912678 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:47.233224 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:47.873818 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:49.154416 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:51.715298 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:37:56.835817 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:38:07.076918 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:38:27.557249 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.951839388s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-719325
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-719325: (10.741236044s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-719325 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-719325 status --format={{.Host}}: exit status 7 (70.782062ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0805 12:39:08.518139 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:39:54.907817 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.477361033s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-719325 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (132.470462ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-719325] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-719325
	    minikube start -p kubernetes-upgrade-719325 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7193252 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-719325 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-719325 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.04936519s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-719325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-719325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-719325: (2.454587074s)
--- PASS: TestKubernetesUpgrade (378.98s)
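
The exit-106 refusal above is a version-ordering guard: minikube will not drive an existing cluster from v1.31.0-rc.0 back to v1.20.0 in place. A deliberately naive sketch of such a guard; real minikube uses a proper semver comparison, not just the minor component:

// downgrade.go: toy version-ordering guard.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor version from a "vMAJOR.MINOR.PATCH..." string;
// good enough for this sketch, not a real semver parse.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	current, requested := "v1.31.0-rc.0", "v1.20.0"
	if minor(requested) < minor(current) {
		fmt.Printf("K8S_DOWNGRADE_UNSUPPORTED: cannot safely downgrade %s to %s\n", current, requested)
	}
}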

                                                
                                    
TestMissingContainerUpgrade (144.76s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3451050298 start -p missing-upgrade-346168 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3451050298 start -p missing-upgrade-346168 --memory=2200 --driver=docker  --container-runtime=docker: (1m12.940677719s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-346168
E0805 12:41:51.860355 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-346168: (10.473100657s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-346168
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-346168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0805 12:42:46.596105 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-346168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (58.40299254s)
helpers_test.go:175: Cleaning up "missing-upgrade-346168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-346168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-346168: (2.20775389s)
--- PASS: TestMissingContainerUpgrade (144.76s)

                                                
                                    
TestPause/serial/Start (58.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-945405 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-945405 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (58.040409052s)
--- PASS: TestPause/serial/Start (58.04s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-945405 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-945405 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.667343296s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.68s)

                                                
                                    
TestPause/serial/Pause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-945405 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-945405 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-945405 --output=json --layout=cluster: exit status 2 (334.420884ms)

                                                
                                                
-- stdout --
	{"Name":"pause-945405","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-945405","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
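
`status --output=json --layout=cluster` reports a paused cluster with StatusCode 418 ("Paused") while the command itself exits 2, as seen above. A minimal Go sketch that decodes the top-level fields of that payload:

// pausestatus.go: sketch of reading the cluster-layout status JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type cluster struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	// Ignore the error: exit status 2 is expected while the cluster is paused.
	out, _ := exec.Command("minikube", "status", "-p", "pause-945405",
		"--output=json", "--layout=cluster").Output()
	var c cluster
	if err := json.Unmarshal(out, &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %d %s\n", c.Name, c.StatusCode, c.StatusName)
}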

                                                
                                    
TestPause/serial/Unpause (0.54s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-945405 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.54s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-945405 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (2.2s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-945405 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-945405 --alsologtostderr -v=5: (2.202785712s)
--- PASS: TestPause/serial/DeletePaused (2.20s)

TestPause/serial/VerifyDeletedResources (0.38s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-945405
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-945405: exit status 1 (17.099582ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-945405: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
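
Note: the non-zero docker volume inspect above is the expected evidence that delete removed the volume. The other resource types can be narrowed by hand with docker's --filter flag (a sketch; the test itself lists everything unfiltered):

    $ docker ps -a --filter name=pause-945405      # should match no containers
    $ docker volume ls --filter name=pause-945405  # should match no volumes
    $ docker network ls --filter name=pause-945405 # should match no networks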

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762405 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-762405 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (103.640321ms)

-- stdout --
	* [NoKubernetes-762405] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
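
Note: this subtest exercises flag validation only: --kubernetes-version conflicts with --no-kubernetes, so minikube exits with usage error 14 before touching the driver. The recovery path suggested by the stderr, sketched against the same profile:

    $ minikube config unset kubernetes-version
    $ minikube start -p NoKubernetes-762405 --no-kubernetes --driver=docker --container-runtime=docker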

TestNoKubernetes/serial/StartWithK8s (42.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762405 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762405 --driver=docker  --container-runtime=docker: (41.9446477s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-762405 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.45s)

TestNoKubernetes/serial/StartWithStopK8s (21.67s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762405 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762405 --no-kubernetes --driver=docker  --container-runtime=docker: (18.940952801s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-762405 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-762405 status -o json: exit status 2 (600.038561ms)

-- stdout --
	{"Name":"NoKubernetes-762405","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-762405
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-762405: (2.12842804s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.67s)
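
Note: exit status 2 with Host "Running" and Kubelet/APIServer "Stopped" is the expected shape after restarting the profile with --no-kubernetes. A sketch for asserting that shape from a script (assumes jq; jq -e derives its exit status from the boolean result):

    $ out/minikube-linux-arm64 -p NoKubernetes-762405 status -o json | jq -e '.Host == "Running" and .Kubelet == "Stopped"'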

TestNoKubernetes/serial/Start (9.04s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762405 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762405 --no-kubernetes --driver=docker  --container-runtime=docker: (9.041698423s)
--- PASS: TestNoKubernetes/serial/Start (9.04s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-762405 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-762405 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.76484ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
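
Note: "Process exited with status 3" is systemd's is-active code for an inactive unit, which minikube ssh surfaces as its own exit status 1; that non-zero exit is exactly what the test asserts. Dropping --quiet prints the state by name (a sketch against the same profile):

    $ out/minikube-linux-arm64 ssh -p NoKubernetes-762405 "sudo systemctl is-active kubelet"
    inactive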

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-762405
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-762405: (1.285927011s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (8.19s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-762405 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-762405 --driver=docker  --container-runtime=docker: (8.18778422s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.19s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-762405 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-762405 "sudo systemctl is-active --quiet service kubelet": exit status 1 (445.279819ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

TestStoppedBinaryUpgrade/Setup (0.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

TestStoppedBinaryUpgrade/Upgrade (96.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3020481186 start -p stopped-upgrade-456025 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3020481186 start -p stopped-upgrade-456025 --memory=2200 --vm-driver=docker  --container-runtime=docker: (55.442467228s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3020481186 -p stopped-upgrade-456025 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3020481186 -p stopped-upgrade-456025 stop: (2.124080987s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-456025 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0805 12:45:08.241513 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 12:45:25.194560 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-456025 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.439250939s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.01s)

TestNetworkPlugins/group/auto/Start (59.21s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (59.205326145s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-456025
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-456025: (1.579653512s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

TestNetworkPlugins/group/kindnet/Start (87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m27.002147318s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.00s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-579wg" [ace1606b-f72a-4283-9233-acbcb655fcf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-579wg" [ace1606b-f72a-4283-9233-acbcb655fcf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004527998s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.39s)

TestNetworkPlugins/group/auto/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.31s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
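
Note: the DNS/Localhost/HairPin trio above repeats for every network-plugin profile that follows: nslookup checks in-cluster service DNS, the localhost probe checks loopback inside the netcat pod, and the hairpin probe checks whether the pod can reach itself through its own "netcat" service. The same checks can be replayed by hand (a sketch; substitute the kubectl context for the other profiles):

    $ kubectl --context auto-556491 exec deployment/netcat -- nslookup kubernetes.default
    $ kubectl --context auto-556491 exec deployment/netcat -- sh -c "nc -w 5 -i 5 -z localhost 8080"
    $ kubectl --context auto-556491 exec deployment/netcat -- sh -c "nc -w 5 -i 5 -z netcat 8080"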

TestNetworkPlugins/group/calico/Start (82.44s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0805 12:46:51.860726 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m22.436909023s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.44s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-85tcn" [2b8ebfd9-0992-4a3c-928b-fef40f2f28ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007338425s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fzgzd" [ab255654-8d7a-4a17-b7e2-47abd46c3863] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fzgzd" [ab255654-8d7a-4a17-b7e2-47abd46c3863] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006025865s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.49s)

TestNetworkPlugins/group/kindnet/DNS (0.51s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.51s)

TestNetworkPlugins/group/kindnet/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.30s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w6xd8" [1a975fc9-3175-4886-ab2b-2ec262c02267] Running
E0805 12:47:46.596329 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008386334s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (73.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m13.748474832s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.75s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (15.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-q5gbx" [abeac076-abca-435f-91f4-33e636a4bc15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-q5gbx" [abeac076-abca-435f-91f4-33e636a4bc15] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.004309448s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.45s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/false/Start (58.65s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (58.644086224s)
--- PASS: TestNetworkPlugins/group/false/Start (58.65s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nfpsh" [fb487bcb-7ea9-43f9-b02f-9d7691a1bbb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nfpsh" [fb487bcb-7ea9-43f9-b02f-9d7691a1bbb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003967335s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.44s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/false/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.37s)

TestNetworkPlugins/group/false/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vw6hm" [ade28bea-1f01-4d27-b0fd-def310f22861] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vw6hm" [ade28bea-1f01-4d27-b0fd-def310f22861] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003137283s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.40s)

TestNetworkPlugins/group/enable-default-cni/Start (94.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m34.17999392s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.18s)

TestNetworkPlugins/group/false/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.30s)

TestNetworkPlugins/group/false/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.33s)

TestNetworkPlugins/group/false/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (68.41s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0805 12:50:25.194175 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 12:50:43.789883 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:43.795083 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:43.805280 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:43.825510 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:43.865741 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:43.945983 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:44.106429 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:44.426892 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:45.067762 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:46.348580 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:48.908779 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:50:54.029321 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:51:04.270148 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m8.408633198s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.41s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rqmsc" [33f83149-2786-4614-a813-d0d769aa2c11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rqmsc" [33f83149-2786-4614-a813-d0d769aa2c11] Running
E0805 12:51:24.750400 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.0040704s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jm9km" [6509b02f-e23b-4e32-b603-6fbb0594d8e3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004637927s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7s44g" [371190a6-611b-4667-bca8-db99de5358cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7s44g" [371190a6-611b-4667-bca8-db99de5358cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006613453s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (58.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0805 12:51:51.861890 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (58.780401058s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.78s)

TestNetworkPlugins/group/kubenet/Start (54.06s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0805 12:52:05.711271 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:52:06.897970 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:06.903215 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:06.914311 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:06.935139 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:06.975673 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:07.055968 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:07.216381 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:07.536731 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:08.177176 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:09.458015 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:12.018744 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:17.139401 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:27.379654 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:52:44.029335 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:44.034689 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:44.044876 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:44.065126 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:44.105371 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:44.185664 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:44.345975 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:44.666324 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:45.307232 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:46.588235 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:46.595588 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:52:47.860261 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-556491 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (54.056035934s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (54.06s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dqg95" [9fa04261-10cc-4234-bb94-1adc01c027c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0805 12:52:49.149095 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:52:54.269438 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-dqg95" [9fa04261-10cc-4234-bb94-1adc01c027c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.052681005s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-556491 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-556491 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sfjd4" [c85872fc-06d4-473b-b9a9-daf989545760] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sfjd4" [c85872fc-06d4-473b-b9a9-daf989545760] Running
E0805 12:53:04.509862 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004979867s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.30s)

TestNetworkPlugins/group/bridge/DNS (0.54s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.54s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-556491 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.29s)

TestNetworkPlugins/group/kubenet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.27s)

TestNetworkPlugins/group/kubenet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-556491 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.24s)
E0805 13:06:03.171347 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:06:16.112176 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 13:06:18.939232 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 13:06:27.825366 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:06:30.856382 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:06:51.860176 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 13:07:06.833268 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 13:07:06.897552 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory

TestStartStop/group/old-k8s-version/serial/FirstStart (158.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-732633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0805 12:53:24.990274 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:53:27.631838 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:53:28.821290 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-732633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m38.006642601s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (158.01s)
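
FirstStart brings up a fresh profile pinned to the legacy release this group targets (v1.20.0). The --kvm-network and --kvm-qemu-uri flags in the recorded command belong to the KVM driver and should have no effect under --driver=docker, so the start reduces to roughly:

    out/minikube-linux-arm64 start -p old-k8s-version-732633 --memory=2200 --wait=true --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0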

TestStartStop/group/no-preload/serial/FirstStart (91.49s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-688080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0805 12:54:03.772467 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:03.777704 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:03.787943 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:03.808047 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:03.848290 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:03.928464 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:04.089281 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:04.409535 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:05.050125 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:05.950914 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:54:06.330608 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:08.890808 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:09.639648 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:54:14.011996 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:24.252253 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:32.665441 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:32.670752 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:32.681064 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:32.701363 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:32.741686 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:32.821971 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:32.983065 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:33.303634 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:33.944564 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:35.225591 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:37.786300 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:42.907073 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:54:44.732480 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:54:50.742240 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:54:53.148074 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-688080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (1m31.486364031s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.49s)
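
--preload=false tells minikube not to use its preloaded image tarball, so component images are fetched individually; pairing it with a release candidate (v1.31.0-rc.0) exercises that cold-pull path, which is presumably why this start takes about a minute and a half. Trimmed:

    out/minikube-linux-arm64 start -p no-preload-688080 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.31.0-rc.0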

TestStartStop/group/no-preload/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-688080 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e18a603-99fc-4b2c-bb31-ba10a28ba71a] Pending
helpers_test.go:344: "busybox" [0e18a603-99fc-4b2c-bb31-ba10a28ba71a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e18a603-99fc-4b2c-bb31-ba10a28ba71a] Running
E0805 12:55:13.628409 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004430174s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-688080 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.42s)
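
DeployApp follows the same shape in every group: create the pod from testdata/busybox.yaml, wait up to 8m0s for the integration-test=busybox pod to report Running and healthy, then exec a trivial command as a smoke test of the kubectl exec path:

    kubectl --context no-preload-688080 create -f testdata/busybox.yaml
    kubectl --context no-preload-688080 exec busybox -- /bin/sh -c "ulimit -n"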

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-688080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-688080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04534753s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-688080 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)
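
The override flags are the point of this subtest: --images and --registries swap the metrics-server image for a stand-in (registry.k8s.io/echoserver:1.4) behind a deliberately unresolvable registry (fake.domain), so the test can confirm the addon's deployment is created with the overridden image reference without pulling the real metrics-server. The follow-up describe is what inspects that deployment:

    kubectl --context no-preload-688080 describe deploy/metrics-server -n kube-system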

TestStartStop/group/no-preload/serial/Stop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-688080 --alsologtostderr -v=3
E0805 12:55:25.195034 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 12:55:25.693382 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-688080 --alsologtostderr -v=3: (11.029002628s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-688080 -n no-preload-688080
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-688080 -n no-preload-688080: exit status 7 (72.700019ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-688080 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
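
Against a stopped profile, minikube status exits non-zero by design; the suite accepts exit status 7 with Host reporting Stopped ("may be ok" above) and then verifies that addons can still be enabled while the cluster is down:

    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-688080 -n no-preload-688080
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-688080 --images=MetricsScraper=registry.k8s.io/echoserver:1.4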

TestStartStop/group/no-preload/serial/SecondStart (292.55s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-688080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0805 12:55:27.871371 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:55:43.789848 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 12:55:54.589117 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-688080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (4m52.163972433s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-688080 -n no-preload-688080
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (292.55s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-732633 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5dbeeee4-a1ac-48e7-bf47-09b5dcf571c7] Pending
helpers_test.go:344: "busybox" [5dbeeee4-a1ac-48e7-bf47-09b5dcf571c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5dbeeee4-a1ac-48e7-bf47-09b5dcf571c7] Running
E0805 12:56:11.472172 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003691252s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-732633 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-732633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-732633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046419244s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-732633 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-732633 --alsologtostderr -v=3
E0805 12:56:16.112723 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:16.118033 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:16.128395 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:16.148732 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:16.189064 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:16.269415 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:16.429661 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:16.750357 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:17.391341 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:18.671641 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:18.940120 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:18.945378 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:18.955733 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:18.976155 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:19.016415 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:19.096845 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:19.257209 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:19.577794 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:20.218051 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:21.231932 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:21.498308 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:24.059309 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-732633 --alsologtostderr -v=3: (11.016345051s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-732633 -n old-k8s-version-732633
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-732633 -n old-k8s-version-732633: exit status 7 (117.207616ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-732633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (122.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-732633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0805 12:56:26.353420 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:29.179807 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:34.908345 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:56:36.594372 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:39.419953 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:56:47.614410 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:56:51.859842 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
E0805 12:56:57.074587 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:56:59.900949 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:57:06.896993 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:57:16.509887 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
E0805 12:57:34.582424 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
E0805 12:57:38.035063 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:57:40.861128 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:57:44.028494 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:57:46.595124 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 12:57:49.018384 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:49.023812 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:49.034089 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:49.054364 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:49.095039 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:49.175307 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:49.335612 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:49.656649 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:50.297612 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:51.577918 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:54.138672 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:57:56.678351 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:56.683960 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:56.694208 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:56.714519 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:56.754750 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:56.835148 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:56.995673 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:57.316300 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:57.957366 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:59.237580 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:57:59.259812 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:58:01.797782 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:58:06.917928 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:58:09.500469 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:58:11.712448 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 12:58:17.158939 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-732633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m2.147911646s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-732633 -n old-k8s-version-732633
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (122.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kxfpj" [26b0af8e-a6f0-4db2-a928-f0b865d91fa9] Running
E0805 12:58:29.980615 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004608514s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
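
This subtest runs after the SecondStart above and waits for the dashboard pods, enabled while the cluster was stopped, to come back healthy. A rough manual equivalent of the wait, assuming kubectl points at the profile's context:

    kubectl --context old-k8s-version-732633 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard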

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kxfpj" [26b0af8e-a6f0-4db2-a928-f0b865d91fa9] Running
E0805 12:58:37.639578 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004277115s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-732633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-732633 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
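
image list --format=json enumerates the images present in the node's container runtime; the test flags anything outside the expected Kubernetes image set, which here is only the busybox image deployed earlier in the serial run:

    out/minikube-linux-arm64 -p old-k8s-version-732633 image list --format=json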

TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-732633 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-732633 -n old-k8s-version-732633
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-732633 -n old-k8s-version-732633: exit status 2 (331.277492ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-732633 -n old-k8s-version-732633
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-732633 -n old-k8s-version-732633: exit status 2 (371.071013ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-732633 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-732633 -n old-k8s-version-732633
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-732633 -n old-k8s-version-732633
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)
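
The Pause sequence encodes the expected state transitions: after pause, status reports the API server Paused and the kubelet Stopped, each with exit status 2 (tolerated as "may be ok"); after unpause, the same status calls must succeed. Condensed:

    out/minikube-linux-arm64 pause -p old-k8s-version-732633 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-732633 -n old-k8s-version-732633
    out/minikube-linux-arm64 unpause -p old-k8s-version-732633 --alsologtostderr -v=1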

TestStartStop/group/embed-certs/serial/FirstStart (48.48s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-180866 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0805 12:58:59.955307 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 12:59:02.781576 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 12:59:03.772108 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
E0805 12:59:10.942100 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 12:59:18.599960 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 12:59:31.454632 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-180866 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (48.484353438s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.48s)
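
--embed-certs is the variable under test in this group: it embeds the client certificate and key inline in kubeconfig rather than referencing files under the minikube home directory. Otherwise the start mirrors the other groups:

    out/minikube-linux-arm64 start -p embed-certs-180866 --memory=2200 --wait=true --embed-certs --driver=docker --container-runtime=docker --kubernetes-version=v1.30.3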

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
E0805 12:59:32.666746 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-180866 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [929f2786-40f8-488f-a987-4653c8e9bb54] Pending
helpers_test.go:344: "busybox" [929f2786-40f8-488f-a987-4653c8e9bb54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [929f2786-40f8-488f-a987-4653c8e9bb54] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003340177s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-180866 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-180866 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-180866 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (10.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-180866 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-180866 --alsologtostderr -v=3: (10.93414482s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-180866 -n embed-certs-180866
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-180866 -n embed-certs-180866: exit status 7 (81.141333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-180866 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (269.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-180866 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0805 13:00:00.350734 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-180866 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (4m28.894940434s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-180866 -n embed-certs-180866
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (269.27s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d7r6k" [bf66ce84-1104-4117-8c59-9e0bd09e6317] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.012023728s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.23s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d7r6k" [bf66ce84-1104-4117-8c59-9e0bd09e6317] Running
E0805 13:00:25.194920 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02665335s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-688080 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.23s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-688080 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.86s)

TestStartStop/group/no-preload/serial/Pause (2.89s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-688080 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-688080 -n no-preload-688080
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-688080 -n no-preload-688080: exit status 2 (360.193636ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-688080 -n no-preload-688080
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-688080 -n no-preload-688080: exit status 2 (349.762847ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-688080 --alsologtostderr -v=1
E0805 13:00:32.863119 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-688080 -n no-preload-688080
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-688080 -n no-preload-688080
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-592409 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0805 13:00:40.520424 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 13:00:43.789504 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 13:01:03.171550 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:03.176782 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:03.187029 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:03.207259 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:03.247796 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:03.328081 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:03.488430 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:03.808580 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:04.449585 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:05.730664 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:08.290826 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:13.411506 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:16.112941 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 13:01:18.939887 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 13:01:23.652523 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:43.796095 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/enable-default-cni-556491/client.crt: no such file or directory
E0805 13:01:44.133613 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:01:46.622497 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/flannel-556491/client.crt: no such file or directory
E0805 13:01:48.242674 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 13:01:51.860415 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-592409 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (1m25.385793618s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.39s)
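The interleaved cert_rotation errors throughout this run come from the shared test binary: client-go's certificate-rotation watcher appears to keep polling client certificates for kubeconfig entries whose profiles earlier tests already deleted, so each poll fails with "no such file or directory". A minimal sketch of the failure mode, assuming the profile layout visible in the paths above:

    # Hypothetical reproduction of the watcher's failing open(): once a profile such as
    # old-k8s-version-732633 has been deleted, its client.crt is gone, and every poll
    # logs the same error seen above until the test binary exits.
    PROFILES=/home/jenkins/minikube-integration/19377-2789855/.minikube/profiles
    cat "$PROFILES/old-k8s-version-732633/client.crt" \
      || echo "client.crt already removed; watcher polls of this path will keep failing"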

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-592409 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2358124-1d48-4d65-9cda-dd3777bf6f14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2358124-1d48-4d65-9cda-dd3777bf6f14] Running
E0805 13:02:06.897794 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kindnet-556491/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003664427s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-592409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)
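For reference, this DeployApp step can be replayed by hand against the same profile; a minimal sketch, assuming the FirstStart cluster is still up and testdata/busybox.yaml from the test tree:

    # Create the busybox pod used by the test and wait for it to become Ready.
    kubectl --context default-k8s-diff-port-592409 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-592409 wait pod \
      -l integration-test=busybox --for=condition=Ready --timeout=8m0s
    # The test then checks the container's open-file limit.
    kubectl --context default-k8s-diff-port-592409 exec busybox -- /bin/sh -c "ulimit -n"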

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-592409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-592409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03007786s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-592409 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-592409 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-592409 --alsologtostderr -v=3: (11.206874945s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409: exit status 7 (81.72966ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-592409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
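Note how EnableAddonAfterStop leans on minikube's templated status output: exit status 7 with Host reported as Stopped is treated as acceptable, after which enabling the dashboard addon against the stopped profile only records the setting for the next start. A minimal sketch of the same gate, assuming the profile from this run:

    # A stopped cluster makes 'status' exit non-zero; per the log, the harness
    # treats exit status 7 as "may be ok".
    if out/minikube-linux-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-592409; then
      echo "host running"
    else
      rc=$?
      [ "$rc" -eq 7 ] && echo "host stopped (exit 7); safe to enable addons before SecondStart"
    fi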

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-592409 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3
E0805 13:02:25.094827 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:02:44.029135 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/calico-556491/client.crt: no such file or directory
E0805 13:02:46.595983 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/skaffold-352302/client.crt: no such file or directory
E0805 13:02:49.018415 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 13:02:56.677697 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 13:03:16.704290 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/bridge-556491/client.crt: no such file or directory
E0805 13:03:24.360585 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/kubenet-556491/client.crt: no such file or directory
E0805 13:03:47.015922 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/old-k8s-version-732633/client.crt: no such file or directory
E0805 13:04:03.772469 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/custom-flannel-556491/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-592409 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.3: (4m49.28740219s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.62s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7qqwq" [4655c865-921f-48b6-82af-fdac3df434bd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005439127s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7qqwq" [4655c865-921f-48b6-82af-fdac3df434bd] Running
E0805 13:04:32.665900 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/false-556491/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006339741s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-180866 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-180866 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-180866 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-180866 -n embed-certs-180866
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-180866 -n embed-certs-180866: exit status 2 (337.063841ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-180866 -n embed-certs-180866
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-180866 -n embed-certs-180866: exit status 2 (349.766031ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-180866 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-180866 -n embed-certs-180866
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-180866 -n embed-certs-180866
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)
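The Pause step drives the same cycle for every profile: pause, confirm the paused state via templated status (which exits 2 while paused, with the API server shown as Paused and the kubelet as Stopped), then unpause and confirm recovery. A minimal sketch of that cycle for this profile:

    out/minikube-linux-arm64 pause -p embed-certs-180866 --alsologtostderr -v=1
    # While paused, these status checks exit with status 2; '|| true' keeps the sketch flowing.
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-180866 || true  # prints "Paused"
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p embed-certs-180866 || true    # prints "Stopped"
    out/minikube-linux-arm64 unpause -p embed-certs-180866 --alsologtostderr -v=1
    # After unpausing, both checks should exit 0 again.
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-180866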

TestStartStop/group/newest-cni/serial/FirstStart (40.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-851849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0805 13:05:05.902746 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:05.908293 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:05.918543 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:05.938845 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:05.979138 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:06.059546 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:06.220556 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:06.541101 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:07.182105 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:08.462383 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:11.022604 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
E0805 13:05:16.143319 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-851849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (40.970674127s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.97s)
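The newest-cni group starts a pre-release Kubernetes with a bring-your-own CNI, which shapes the rest of the serial run: with no CNI installed, ordinary pods cannot schedule, so the DeployApp and *ExistsAfterStop steps below are deliberate no-ops. An annotated copy of the start invocation (comments are editorial; flags are verbatim from the log):

    # --wait restricts readiness gating to components that work without a CNI;
    # --network-plugin=cni defers CNI installation to the user;
    # --extra-config passes the pod CIDR straight through to kubeadm.
    out/minikube-linux-arm64 start -p newest-cni-851849 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.31.0-rc.0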

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-851849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-851849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.197476792s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/newest-cni/serial/Stop (5.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-851849 --alsologtostderr -v=3
E0805 13:05:25.194314 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
E0805 13:05:26.383572 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-851849 --alsologtostderr -v=3: (5.801157807s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.80s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-851849 -n newest-cni-851849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-851849 -n newest-cni-851849: exit status 7 (71.59019ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-851849 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (20.28s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-851849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0
E0805 13:05:43.790213 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/auto-556491/client.crt: no such file or directory
E0805 13:05:46.864360 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/no-preload-688080/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-851849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0-rc.0: (19.746910865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-851849 -n newest-cni-851849
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.28s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-851849 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.83s)

TestStartStop/group/newest-cni/serial/Pause (3.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-851849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-851849 -n newest-cni-851849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-851849 -n newest-cni-851849: exit status 2 (420.916647ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-851849 -n newest-cni-851849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-851849 -n newest-cni-851849: exit status 2 (444.040747ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-851849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-851849 -n newest-cni-851849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-851849 -n newest-cni-851849
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.64s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-pkwmz" [064cd00f-441b-4cd0-84a5-efa7cae1aba0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003612681s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
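These post-restart checks select the dashboard pod by label rather than by name, since the pod name changes across restarts. A roughly equivalent one-liner with kubectl, assuming the same context and label:

    # Wait up to the test's 9-minute budget for the dashboard pod to be Ready.
    kubectl --context default-k8s-diff-port-592409 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s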

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-pkwmz" [064cd00f-441b-4cd0-84a5-efa7cae1aba0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003821671s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-592409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-592409 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-592409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409: exit status 2 (322.09469ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409: exit status 2 (305.941429ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-592409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-592409 -n default-k8s-diff-port-592409
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)
Test skip (27/350)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0.51s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-205372 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-205372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-205372
--- SKIP: TestDownloadOnlyKic (0.51s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-556491 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-556491

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-556491" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-556491
>>> host: docker daemon status:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: docker daemon config:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: docker system info:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: cri-docker daemon status:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: cri-docker daemon config:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: cri-dockerd version:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: containerd daemon status:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: containerd daemon config:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: containerd config dump:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: crio daemon status:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: crio daemon config:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: /etc/crio:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
>>> host: crio config:
* Profile "cilium-556491" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-556491"
----------------------- debugLogs end: cilium-556491 [took: 5.093353506s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-556491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-556491
--- SKIP: TestNetworkPlugins/group/cilium (5.31s)
x
+
TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-030164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-030164
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)