Test Report: Docker_Linux_crio_arm64 20319

648f194b476483b13df21998417ef6977c25d9d6:2025-01-27:38091

Failed tests (3/330)

Order | Failed test                 | Duration (s)
------|-----------------------------|-------------
36    | TestAddons/parallel/Ingress | 153.92
246   | TestPreload                 | 2404.7
248   | TestScheduledStopUnix       | 37.27
TestAddons/parallel/Ingress (153.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-334107 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-334107 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-334107 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [edb038e0-03b3-4922-9f77-b29edd6ec56e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [edb038e0-03b3-4922-9f77-b29edd6ec56e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003865229s
I0127 11:22:57.601156  305936 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-334107 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.876954615s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-334107 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-334107
helpers_test.go:235: (dbg) docker inspect addons-334107:

-- stdout --
	[
	    {
	        "Id": "e75c4376fc5aa457bbcb5fe886c60de56bfdf330b1f67dd618e9e622acf2db82",
	        "Created": "2025-01-27T11:18:20.414975298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307203,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T11:18:20.583771431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/e75c4376fc5aa457bbcb5fe886c60de56bfdf330b1f67dd618e9e622acf2db82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e75c4376fc5aa457bbcb5fe886c60de56bfdf330b1f67dd618e9e622acf2db82/hostname",
	        "HostsPath": "/var/lib/docker/containers/e75c4376fc5aa457bbcb5fe886c60de56bfdf330b1f67dd618e9e622acf2db82/hosts",
	        "LogPath": "/var/lib/docker/containers/e75c4376fc5aa457bbcb5fe886c60de56bfdf330b1f67dd618e9e622acf2db82/e75c4376fc5aa457bbcb5fe886c60de56bfdf330b1f67dd618e9e622acf2db82-json.log",
	        "Name": "/addons-334107",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-334107:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-334107",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4ca3765ee08fa2afdefdeb218bb434b1fff4e4b4dd240a284ea8f370a8b8f954-init/diff:/var/lib/docker/overlay2/f9679fb4b68b50924b42b41bb8163a036f86217b5bdb257ff1bd6b1d4c169198/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ca3765ee08fa2afdefdeb218bb434b1fff4e4b4dd240a284ea8f370a8b8f954/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ca3765ee08fa2afdefdeb218bb434b1fff4e4b4dd240a284ea8f370a8b8f954/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ca3765ee08fa2afdefdeb218bb434b1fff4e4b4dd240a284ea8f370a8b8f954/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-334107",
	                "Source": "/var/lib/docker/volumes/addons-334107/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-334107",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-334107",
	                "name.minikube.sigs.k8s.io": "addons-334107",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a145692d397ae85f7284c7df641b64de5f95fdd95b8a186f276fb0dd81ae2c76",
	            "SandboxKey": "/var/run/docker/netns/a145692d397a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-334107": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "79dc6f937b3b4b82d5e026d04fe00ec980cd5c3ff6ecca4da43f8c26b5b0c990",
	                    "EndpointID": "4db2b71ad62fcbcadfa6d969ffcdb48c161e98a2285f03a39c9afc775a6c65d6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-334107",
	                        "e75c4376fc5a"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-334107 -n addons-334107
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 logs -n 25: (1.628159103s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-227627                                                                     | download-only-227627   | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC | 27 Jan 25 11:17 UTC |
	| start   | --download-only -p                                                                          | download-docker-159827 | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC |                     |
	|         | download-docker-159827                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-159827                                                                   | download-docker-159827 | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC | 27 Jan 25 11:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-166762   | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC |                     |
	|         | binary-mirror-166762                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39903                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-166762                                                                     | binary-mirror-166762   | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC | 27 Jan 25 11:17 UTC |
	| addons  | disable dashboard -p                                                                        | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC |                     |
	|         | addons-334107                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC |                     |
	|         | addons-334107                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-334107 --wait=true                                                                | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC | 27 Jan 25 11:20 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-334107 addons disable                                                                | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-334107 addons disable                                                                | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | -p addons-334107                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-334107 addons disable                                                                | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-334107 ip                                                                            | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	| addons  | addons-334107 addons disable                                                                | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-334107 addons disable                                                                | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-334107 addons                                                                        | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-334107 ssh cat                                                                       | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | /opt/local-path-provisioner/pvc-2074cc79-7217-4577-855d-67765c1957bf_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-334107 addons disable                                                                | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-334107 addons                                                                        | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-334107 addons                                                                        | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-334107 addons                                                                        | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:22 UTC | 27 Jan 25 11:22 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-334107 addons                                                                        | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:22 UTC | 27 Jan 25 11:22 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-334107 addons                                                                        | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:22 UTC | 27 Jan 25 11:22 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-334107 ssh curl -s                                                                   | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:22 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-334107 ip                                                                            | addons-334107          | jenkins | v1.35.0 | 27 Jan 25 11:25 UTC | 27 Jan 25 11:25 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:17:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:17:55.268436  306694 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:17:55.268556  306694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:17:55.268566  306694 out.go:358] Setting ErrFile to fd 2...
	I0127 11:17:55.268571  306694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:17:55.268807  306694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:17:55.269233  306694 out.go:352] Setting JSON to false
	I0127 11:17:55.270084  306694 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7223,"bootTime":1737969453,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 11:17:55.270162  306694 start.go:139] virtualization:  
	I0127 11:17:55.273601  306694 out.go:177] * [addons-334107] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:17:55.277445  306694 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:17:55.277554  306694 notify.go:220] Checking for updates...
	I0127 11:17:55.283223  306694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:17:55.285998  306694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:17:55.288817  306694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 11:17:55.291691  306694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:17:55.294512  306694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:17:55.297571  306694 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:17:55.326550  306694 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:17:55.326670  306694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:17:55.382705  306694 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-01-27 11:17:55.373716978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:17:55.382823  306694 docker.go:318] overlay module found
	I0127 11:17:55.385969  306694 out.go:177] * Using the docker driver based on user configuration
	I0127 11:17:55.388860  306694 start.go:297] selected driver: docker
	I0127 11:17:55.388882  306694 start.go:901] validating driver "docker" against <nil>
	I0127 11:17:55.388897  306694 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:17:55.389636  306694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:17:55.446677  306694 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:46 SystemTime:2025-01-27 11:17:55.437249807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:17:55.446900  306694 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:17:55.447159  306694 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:17:55.450144  306694 out.go:177] * Using Docker driver with root privileges
	I0127 11:17:55.453138  306694 cni.go:84] Creating CNI manager for ""
	I0127 11:17:55.453212  306694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:17:55.453229  306694 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:17:55.453323  306694 start.go:340] cluster config:
	{Name:addons-334107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-334107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:17:55.456484  306694 out.go:177] * Starting "addons-334107" primary control-plane node in "addons-334107" cluster
	I0127 11:17:55.459375  306694 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 11:17:55.462306  306694 out.go:177] * Pulling base image v0.0.46 ...
	I0127 11:17:55.465050  306694 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:17:55.465101  306694 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0127 11:17:55.465115  306694 cache.go:56] Caching tarball of preloaded images
	I0127 11:17:55.465143  306694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 11:17:55.465199  306694 preload.go:172] Found /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0127 11:17:55.465210  306694 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 11:17:55.465559  306694 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/config.json ...
	I0127 11:17:55.465635  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/config.json: {Name:mkd806ba61e5b6dcf4a537c937618b05ca59010c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:17:55.480875  306694 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 11:17:55.480998  306694 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 11:17:55.481022  306694 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0127 11:17:55.481028  306694 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0127 11:17:55.481035  306694 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0127 11:17:55.481052  306694 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0127 11:18:13.147121  306694 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0127 11:18:13.147161  306694 cache.go:227] Successfully downloaded all kic artifacts
	I0127 11:18:13.147210  306694 start.go:360] acquireMachinesLock for addons-334107: {Name:mk6bcedb76e58cf174b9564362eeb9eb1fada087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:18:13.147351  306694 start.go:364] duration metric: took 115.675µs to acquireMachinesLock for "addons-334107"
	I0127 11:18:13.147394  306694 start.go:93] Provisioning new machine with config: &{Name:addons-334107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-334107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:18:13.147471  306694 start.go:125] createHost starting for "" (driver="docker")
	I0127 11:18:13.150906  306694 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0127 11:18:13.151183  306694 start.go:159] libmachine.API.Create for "addons-334107" (driver="docker")
	I0127 11:18:13.151225  306694 client.go:168] LocalClient.Create starting
	I0127 11:18:13.151335  306694 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem
	I0127 11:18:13.589654  306694 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem
	I0127 11:18:13.942501  306694 cli_runner.go:164] Run: docker network inspect addons-334107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 11:18:13.958682  306694 cli_runner.go:211] docker network inspect addons-334107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 11:18:13.958775  306694 network_create.go:284] running [docker network inspect addons-334107] to gather additional debugging logs...
	I0127 11:18:13.958797  306694 cli_runner.go:164] Run: docker network inspect addons-334107
	W0127 11:18:13.977991  306694 cli_runner.go:211] docker network inspect addons-334107 returned with exit code 1
	I0127 11:18:13.978025  306694 network_create.go:287] error running [docker network inspect addons-334107]: docker network inspect addons-334107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-334107 not found
	I0127 11:18:13.978040  306694 network_create.go:289] output of [docker network inspect addons-334107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-334107 not found
	
	** /stderr **
	I0127 11:18:13.978140  306694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:18:13.995635  306694 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001954dd0}
	I0127 11:18:13.995681  306694 network_create.go:124] attempt to create docker network addons-334107 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0127 11:18:13.995737  306694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-334107 addons-334107
	I0127 11:18:14.068710  306694 network_create.go:108] docker network addons-334107 192.168.49.0/24 created
	I0127 11:18:14.068746  306694 kic.go:121] calculated static IP "192.168.49.2" for the "addons-334107" container
	I0127 11:18:14.068825  306694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 11:18:14.084369  306694 cli_runner.go:164] Run: docker volume create addons-334107 --label name.minikube.sigs.k8s.io=addons-334107 --label created_by.minikube.sigs.k8s.io=true
	I0127 11:18:14.102689  306694 oci.go:103] Successfully created a docker volume addons-334107
	I0127 11:18:14.102797  306694 cli_runner.go:164] Run: docker run --rm --name addons-334107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-334107 --entrypoint /usr/bin/test -v addons-334107:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 11:18:16.124350  306694 cli_runner.go:217] Completed: docker run --rm --name addons-334107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-334107 --entrypoint /usr/bin/test -v addons-334107:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (2.021513389s)
	I0127 11:18:16.124383  306694 oci.go:107] Successfully prepared a docker volume addons-334107
	I0127 11:18:16.124413  306694 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:18:16.124434  306694 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 11:18:16.124507  306694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-334107:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 11:18:20.347503  306694 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-334107:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.222934416s)
	I0127 11:18:20.347540  306694 kic.go:203] duration metric: took 4.223102432s to extract preloaded images to volume ...
	W0127 11:18:20.347681  306694 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 11:18:20.347793  306694 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 11:18:20.399919  306694 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-334107 --name addons-334107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-334107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-334107 --network addons-334107 --ip 192.168.49.2 --volume addons-334107:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 11:18:20.757121  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Running}}
	I0127 11:18:20.779614  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:20.808186  306694 cli_runner.go:164] Run: docker exec addons-334107 stat /var/lib/dpkg/alternatives/iptables
	I0127 11:18:20.860119  306694 oci.go:144] the created container "addons-334107" has a running status.
	I0127 11:18:20.860148  306694 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa...
	I0127 11:18:21.339811  306694 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 11:18:21.364612  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:21.392500  306694 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 11:18:21.392522  306694 kic_runner.go:114] Args: [docker exec --privileged addons-334107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 11:18:21.454624  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:21.476667  306694 machine.go:93] provisionDockerMachine start ...
	I0127 11:18:21.476764  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:21.508825  306694 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:21.509089  306694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0127 11:18:21.509100  306694 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:18:21.662948  306694 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-334107
	
	I0127 11:18:21.663013  306694 ubuntu.go:169] provisioning hostname "addons-334107"
	I0127 11:18:21.663140  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:21.684840  306694 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:21.685088  306694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0127 11:18:21.685107  306694 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-334107 && echo "addons-334107" | sudo tee /etc/hostname
	I0127 11:18:21.829630  306694 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-334107
	
	I0127 11:18:21.829715  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:21.852704  306694 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:21.852953  306694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0127 11:18:21.852970  306694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-334107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-334107/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-334107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:18:21.983103  306694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:18:21.983142  306694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20319-300538/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-300538/.minikube}
	I0127 11:18:21.983171  306694 ubuntu.go:177] setting up certificates
	I0127 11:18:21.983181  306694 provision.go:84] configureAuth start
	I0127 11:18:21.983247  306694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-334107
	I0127 11:18:22.000689  306694 provision.go:143] copyHostCerts
	I0127 11:18:22.000766  306694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem (1082 bytes)
	I0127 11:18:22.000880  306694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem (1123 bytes)
	I0127 11:18:22.000934  306694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem (1679 bytes)
	I0127 11:18:22.000994  306694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem org=jenkins.addons-334107 san=[127.0.0.1 192.168.49.2 addons-334107 localhost minikube]
	I0127 11:18:22.817688  306694 provision.go:177] copyRemoteCerts
	I0127 11:18:22.817763  306694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:18:22.817807  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:22.835798  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:22.924117  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:18:22.949027  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:18:22.973109  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:18:22.997205  306694 provision.go:87] duration metric: took 1.014006091s to configureAuth
	I0127 11:18:22.997231  306694 ubuntu.go:193] setting minikube options for container-runtime
	I0127 11:18:22.997446  306694 config.go:182] Loaded profile config "addons-334107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:18:22.997553  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:23.017835  306694 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:23.018111  306694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0127 11:18:23.018140  306694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:18:23.246646  306694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:18:23.246672  306694 machine.go:96] duration metric: took 1.769980924s to provisionDockerMachine
	I0127 11:18:23.246683  306694 client.go:171] duration metric: took 10.095447609s to LocalClient.Create
	I0127 11:18:23.246697  306694 start.go:167] duration metric: took 10.095514382s to libmachine.API.Create "addons-334107"
	I0127 11:18:23.246705  306694 start.go:293] postStartSetup for "addons-334107" (driver="docker")
	I0127 11:18:23.246716  306694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:18:23.246784  306694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:18:23.246830  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:23.264463  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:23.357043  306694 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:18:23.360470  306694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 11:18:23.360513  306694 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 11:18:23.360524  306694 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 11:18:23.360532  306694 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 11:18:23.360544  306694 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/addons for local assets ...
	I0127 11:18:23.360627  306694 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/files for local assets ...
	I0127 11:18:23.360655  306694 start.go:296] duration metric: took 113.944853ms for postStartSetup
	I0127 11:18:23.360985  306694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-334107
	I0127 11:18:23.378465  306694 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/config.json ...
	I0127 11:18:23.378769  306694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:18:23.378828  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:23.395820  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:23.483947  306694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 11:18:23.488560  306694 start.go:128] duration metric: took 10.341071782s to createHost
	I0127 11:18:23.488586  306694 start.go:83] releasing machines lock for "addons-334107", held for 10.34122086s
	I0127 11:18:23.488658  306694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-334107
	I0127 11:18:23.506292  306694 ssh_runner.go:195] Run: cat /version.json
	I0127 11:18:23.506353  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:23.506627  306694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:18:23.506698  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:23.529800  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:23.539452  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:23.614358  306694 ssh_runner.go:195] Run: systemctl --version
	I0127 11:18:23.742475  306694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:18:23.883901  306694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:18:23.888272  306694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:18:23.910231  306694 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0127 11:18:23.910341  306694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:18:23.944128  306694 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
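minikube side-lines conflicting CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. A sketch of the same `find … -exec mv` rename against a scratch directory (filenames are illustrative; the `sh -c '… "$1"' _ {}` form is a safer variant of the inline `{}` substitution shown in the log):

```shell
# Simulate /etc/cni/net.d and disable bridge/podman configs the way cni.go does.
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" "$netd/100-crio-bridge.conf" "$netd/10-kindnet.conflist"

# Rename anything matching *bridge* or *podman* that is not already disabled.
find "$netd" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$netd"
```

The kindnet config is left alone; only the bridge and podman configs pick up the suffix.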
	I0127 11:18:23.944154  306694 start.go:495] detecting cgroup driver to use...
	I0127 11:18:23.944188  306694 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 11:18:23.944239  306694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:18:23.961079  306694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:18:23.972792  306694 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:18:23.972908  306694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:18:23.986720  306694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:18:24.001218  306694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:18:24.096027  306694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:18:24.188919  306694 docker.go:233] disabling docker service ...
	I0127 11:18:24.188998  306694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:18:24.208975  306694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:18:24.220620  306694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:18:24.310700  306694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:18:24.409483  306694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:18:24.421152  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:18:24.437996  306694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 11:18:24.438106  306694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:18:24.448595  306694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:18:24.448676  306694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:18:24.458107  306694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:18:24.467728  306694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:18:24.477336  306694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:18:24.486600  306694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:18:24.496306  306694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:18:24.512407  306694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
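The run of `sed` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: swap the pause image, force the `cgroupfs` cgroup manager, and re-pin `conmon_cgroup` to `"pod"`. The same transformations applied to a small sample fragment (the sample file contents are illustrative, not the real drop-in):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions the provisioner runs, without sudo, against the sample.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

The delete-then-append pair guarantees exactly one `conmon_cgroup` line, placed directly after `cgroup_manager`.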
	I0127 11:18:24.523053  306694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:18:24.531817  306694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:18:24.540232  306694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:24.617736  306694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:18:24.718223  306694 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:18:24.718325  306694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:18:24.721738  306694 start.go:563] Will wait 60s for crictl version
	I0127 11:18:24.721806  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:18:24.725161  306694 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:18:24.766626  306694 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0127 11:18:24.766791  306694 ssh_runner.go:195] Run: crio --version
	I0127 11:18:24.805456  306694 ssh_runner.go:195] Run: crio --version
	I0127 11:18:24.851200  306694 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0127 11:18:24.854152  306694 cli_runner.go:164] Run: docker network inspect addons-334107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:18:24.869945  306694 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0127 11:18:24.873399  306694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
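The `{ grep -v …; echo …; } > /tmp/h.$$; sudo cp` idiom above is an idempotent hosts-file update: any existing line for the name is dropped before the fresh mapping is appended, so re-running it never duplicates the entry. A sketch against a throwaway file:

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"

add_entry() {
  # Drop any existing line ending in <tab>host.minikube.internal, then append.
  { grep -v $'\thost.minikube.internal$' "$hosts"; \
    printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
  cp "$hosts.new" "$hosts"
}

add_entry
add_entry   # idempotent: still exactly one entry
grep -c 'host.minikube.internal' "$hosts"   # → 1
```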
	I0127 11:18:24.883708  306694 kubeadm.go:883] updating cluster {Name:addons-334107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-334107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:18:24.883822  306694 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 11:18:24.883881  306694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:18:24.965368  306694 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:18:24.965394  306694 crio.go:433] Images already preloaded, skipping extraction
	I0127 11:18:24.965487  306694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:18:25.003799  306694 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 11:18:25.003823  306694 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:18:25.003837  306694 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0127 11:18:25.003941  306694 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-334107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-334107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:18:25.004027  306694 ssh_runner.go:195] Run: crio config
	I0127 11:18:25.066409  306694 cni.go:84] Creating CNI manager for ""
	I0127 11:18:25.066435  306694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:18:25.066447  306694 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:18:25.066500  306694 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-334107 NodeName:addons-334107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:18:25.066676  306694 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-334107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:18:25.066812  306694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:18:25.075831  306694 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:18:25.075936  306694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:18:25.085360  306694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0127 11:18:25.104417  306694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:18:25.123099  306694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0127 11:18:25.141526  306694 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0127 11:18:25.145189  306694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:18:25.156044  306694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:25.235116  306694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:18:25.248709  306694 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107 for IP: 192.168.49.2
	I0127 11:18:25.248732  306694 certs.go:194] generating shared ca certs ...
	I0127 11:18:25.248749  306694 certs.go:226] acquiring lock for ca certs: {Name:mk949cfe0d73736f3d2e354b486773524a8fcbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:25.248884  306694 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key
	I0127 11:18:25.678901  306694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt ...
	I0127 11:18:25.678951  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt: {Name:mkc47a15c0154af90e3a3ef9c58f88ef0bb60e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:25.679175  306694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key ...
	I0127 11:18:25.679192  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key: {Name:mk4c82ae59b914290adacec19770153e31e97f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:25.679977  306694 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key
	I0127 11:18:26.224239  306694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.crt ...
	I0127 11:18:26.224271  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.crt: {Name:mkd8f25c50161694d6bded52e4748a30a400e1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:26.224467  306694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key ...
	I0127 11:18:26.224480  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key: {Name:mk727e60e4ccfafcbcbf30784c37fad2ab19bbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:26.224564  306694 certs.go:256] generating profile certs ...
	I0127 11:18:26.224626  306694 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.key
	I0127 11:18:26.224646  306694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt with IP's: []
	I0127 11:18:26.410909  306694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt ...
	I0127 11:18:26.410940  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: {Name:mk6e7c57e7dbc229227b6a9e35580aad51df166d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:26.411781  306694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.key ...
	I0127 11:18:26.411797  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.key: {Name:mk0ec320d845c2a01c435e1054e26c846996f75b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:26.412513  306694 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.key.2e3a804e
	I0127 11:18:26.412539  306694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.crt.2e3a804e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0127 11:18:26.746595  306694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.crt.2e3a804e ...
	I0127 11:18:26.746628  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.crt.2e3a804e: {Name:mk0dc0c9e9be380af92c480b7750e8e23f868c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:26.747431  306694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.key.2e3a804e ...
	I0127 11:18:26.747455  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.key.2e3a804e: {Name:mk49f86b6f7cc48dcba0efa1fd80c70b0c7d7da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:26.748191  306694 certs.go:381] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.crt.2e3a804e -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.crt
	I0127 11:18:26.748378  306694 certs.go:385] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.key.2e3a804e -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.key
	I0127 11:18:26.748440  306694 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.key
	I0127 11:18:26.748466  306694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.crt with IP's: []
	I0127 11:18:27.286563  306694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.crt ...
	I0127 11:18:27.286596  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.crt: {Name:mk2c929ca433ca4c08301f94f6daa010527b7b00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:27.287386  306694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.key ...
	I0127 11:18:27.287405  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.key: {Name:mk7cced1986c1c65a623ebab7f523bf2ea6b533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:27.288202  306694 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 11:18:27.288248  306694 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:18:27.288291  306694 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:18:27.288320  306694 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem (1679 bytes)
	I0127 11:18:27.288956  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:18:27.314292  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 11:18:27.338358  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:18:27.362547  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:18:27.386700  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 11:18:27.410140  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:18:27.433968  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:18:27.458348  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:18:27.482656  306694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:18:27.506580  306694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:18:27.525475  306694 ssh_runner.go:195] Run: openssl version
	I0127 11:18:27.530884  306694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:18:27.540561  306694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:27.544222  306694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:18 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:27.544319  306694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:27.551133  306694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
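The `b5213941.0` link above follows OpenSSL's subject-hash lookup scheme: `openssl x509 -hash` prints the hash under which tools expect to find a CA inside a `-CApath` directory, and the symlink makes `minikubeCA.pem` resolvable by that name. A self-contained sketch with a throwaway CA (names are illustrative, not the real minikube cert):

```shell
certs=$(mktemp -d)
# Generate a throwaway self-signed CA (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$certs/ca.key" -out "$certs/ca.pem" 2>/dev/null

# Link the cert under its subject hash, as /etc/ssl/certs expects.
hash=$(openssl x509 -hash -noout -in "$certs/ca.pem")
ln -s "$certs/ca.pem" "$certs/$hash.0"

# Verification via -CApath now resolves the CA through the hash link.
openssl verify -CApath "$certs" "$certs/ca.pem"
```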
	I0127 11:18:27.560492  306694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:18:27.563849  306694 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:18:27.563898  306694 kubeadm.go:392] StartCluster: {Name:addons-334107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-334107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:18:27.563981  306694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:18:27.564035  306694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:18:27.601156  306694 cri.go:89] found id: ""
	I0127 11:18:27.601227  306694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:18:27.610284  306694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:18:27.619087  306694 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 11:18:27.619154  306694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:18:27.627967  306694 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:18:27.627990  306694 kubeadm.go:157] found existing configuration files:
	
	I0127 11:18:27.628049  306694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:18:27.636808  306694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:18:27.636911  306694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:18:27.652274  306694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:18:27.662529  306694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:18:27.662599  306694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:18:27.672263  306694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:18:27.682377  306694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:18:27.682453  306694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:18:27.691964  306694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:18:27.703416  306694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:18:27.703482  306694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:18:27.712187  306694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 11:18:27.753241  306694 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:18:27.753304  306694 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:18:27.774769  306694 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 11:18:27.774858  306694 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 11:18:27.774899  306694 kubeadm.go:310] OS: Linux
	I0127 11:18:27.774948  306694 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 11:18:27.775003  306694 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 11:18:27.775053  306694 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 11:18:27.775125  306694 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 11:18:27.775180  306694 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 11:18:27.775238  306694 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 11:18:27.775288  306694 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 11:18:27.775342  306694 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 11:18:27.775392  306694 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 11:18:27.840112  306694 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:18:27.840226  306694 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:18:27.840328  306694 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:18:27.847625  306694 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:18:27.853854  306694 out.go:235]   - Generating certificates and keys ...
	I0127 11:18:27.854062  306694 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:18:27.854178  306694 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:18:28.498981  306694 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:18:28.749137  306694 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:18:28.906486  306694 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:18:29.151377  306694 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:18:30.112228  306694 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:18:30.113822  306694 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-334107 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0127 11:18:30.360925  306694 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:18:30.361283  306694 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-334107 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0127 11:18:30.507833  306694 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:18:30.704366  306694 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:18:31.084068  306694 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:18:31.084281  306694 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:18:31.922760  306694 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:18:32.428010  306694 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:18:32.824542  306694 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:18:33.320313  306694 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:18:33.987927  306694 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:18:33.988682  306694 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:18:33.992307  306694 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:18:33.995672  306694 out.go:235]   - Booting up control plane ...
	I0127 11:18:33.995784  306694 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:18:33.995869  306694 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:18:33.996768  306694 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:18:34.007869  306694 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:18:34.021676  306694 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:18:34.021735  306694 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:18:34.118172  306694 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:18:34.118295  306694 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:18:35.120193  306694 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001721189s
	I0127 11:18:35.120299  306694 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:18:42.125285  306694 kubeadm.go:310] [api-check] The API server is healthy after 7.005467746s
	I0127 11:18:42.149569  306694 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:18:42.190586  306694 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:18:42.234545  306694 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:18:42.234755  306694 kubeadm.go:310] [mark-control-plane] Marking the node addons-334107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:18:42.254136  306694 kubeadm.go:310] [bootstrap-token] Using token: 0fw67z.4ysska0bru2nwh14
	I0127 11:18:42.257158  306694 out.go:235]   - Configuring RBAC rules ...
	I0127 11:18:42.257295  306694 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:18:42.268069  306694 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:18:42.278880  306694 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:18:42.287806  306694 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:18:42.293748  306694 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:18:42.299304  306694 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:18:42.533492  306694 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:18:42.968460  306694 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:18:43.533933  306694 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:18:43.533955  306694 kubeadm.go:310] 
	I0127 11:18:43.534016  306694 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:18:43.534021  306694 kubeadm.go:310] 
	I0127 11:18:43.534098  306694 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:18:43.534103  306694 kubeadm.go:310] 
	I0127 11:18:43.534129  306694 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:18:43.534189  306694 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:18:43.534240  306694 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:18:43.534245  306694 kubeadm.go:310] 
	I0127 11:18:43.534299  306694 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:18:43.534304  306694 kubeadm.go:310] 
	I0127 11:18:43.534367  306694 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:18:43.534373  306694 kubeadm.go:310] 
	I0127 11:18:43.534425  306694 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:18:43.534499  306694 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:18:43.534568  306694 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:18:43.534581  306694 kubeadm.go:310] 
	I0127 11:18:43.534666  306694 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:18:43.534743  306694 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:18:43.534747  306694 kubeadm.go:310] 
	I0127 11:18:43.534831  306694 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0fw67z.4ysska0bru2nwh14 \
	I0127 11:18:43.534940  306694 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 \
	I0127 11:18:43.534962  306694 kubeadm.go:310] 	--control-plane 
	I0127 11:18:43.534967  306694 kubeadm.go:310] 
	I0127 11:18:43.535051  306694 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:18:43.535056  306694 kubeadm.go:310] 
	I0127 11:18:43.535155  306694 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0fw67z.4ysska0bru2nwh14 \
	I0127 11:18:43.535258  306694 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 
	I0127 11:18:43.537713  306694 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 11:18:43.537940  306694 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 11:18:43.538044  306694 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:18:43.538066  306694 cni.go:84] Creating CNI manager for ""
	I0127 11:18:43.538074  306694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:18:43.543190  306694 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 11:18:43.546062  306694 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 11:18:43.550050  306694 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 11:18:43.550071  306694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 11:18:43.570698  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 11:18:43.843648  306694 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:18:43.843782  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:43.843882  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-334107 minikube.k8s.io/updated_at=2025_01_27T11_18_43_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=addons-334107 minikube.k8s.io/primary=true
	I0127 11:18:43.860156  306694 ops.go:34] apiserver oom_adj: -16
	I0127 11:18:43.985924  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:44.487016  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:44.986685  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:45.486844  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:45.986741  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:46.486350  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:46.986203  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:47.486177  306694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:18:47.634743  306694 kubeadm.go:1113] duration metric: took 3.791005626s to wait for elevateKubeSystemPrivileges
	I0127 11:18:47.634796  306694 kubeadm.go:394] duration metric: took 20.070901865s to StartCluster
	I0127 11:18:47.634814  306694 settings.go:142] acquiring lock: {Name:mk59e26dfc61a439e501d9ae8e7cbc4a6f05e310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:47.635571  306694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:18:47.636310  306694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/kubeconfig: {Name:mka2258aa0d8dec49c19d97bc831e58d42b19053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:47.639402  306694 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:18:47.639830  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:18:47.640555  306694 config.go:182] Loaded profile config "addons-334107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:18:47.640663  306694 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0127 11:18:47.640779  306694 addons.go:69] Setting yakd=true in profile "addons-334107"
	I0127 11:18:47.640801  306694 addons.go:238] Setting addon yakd=true in "addons-334107"
	I0127 11:18:47.640840  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.640903  306694 addons.go:69] Setting inspektor-gadget=true in profile "addons-334107"
	I0127 11:18:47.640925  306694 addons.go:238] Setting addon inspektor-gadget=true in "addons-334107"
	I0127 11:18:47.640944  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.641019  306694 addons.go:69] Setting metrics-server=true in profile "addons-334107"
	I0127 11:18:47.641060  306694 addons.go:238] Setting addon metrics-server=true in "addons-334107"
	I0127 11:18:47.641106  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.641482  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.641624  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.643166  306694 out.go:177] * Verifying Kubernetes components...
	I0127 11:18:47.643267  306694 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-334107"
	I0127 11:18:47.643280  306694 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-334107"
	I0127 11:18:47.643307  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.643752  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.644326  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.644801  306694 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-334107"
	I0127 11:18:47.644825  306694 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-334107"
	I0127 11:18:47.644861  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.645278  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.646490  306694 addons.go:69] Setting cloud-spanner=true in profile "addons-334107"
	I0127 11:18:47.646518  306694 addons.go:238] Setting addon cloud-spanner=true in "addons-334107"
	I0127 11:18:47.646547  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.646991  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.648564  306694 addons.go:69] Setting registry=true in profile "addons-334107"
	I0127 11:18:47.648592  306694 addons.go:238] Setting addon registry=true in "addons-334107"
	I0127 11:18:47.648623  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.649064  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.657990  306694 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-334107"
	I0127 11:18:47.658056  306694 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-334107"
	I0127 11:18:47.658087  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.658556  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.660890  306694 addons.go:69] Setting storage-provisioner=true in profile "addons-334107"
	I0127 11:18:47.660947  306694 addons.go:238] Setting addon storage-provisioner=true in "addons-334107"
	I0127 11:18:47.661048  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.661584  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.669830  306694 addons.go:69] Setting default-storageclass=true in profile "addons-334107"
	I0127 11:18:47.669867  306694 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-334107"
	I0127 11:18:47.670202  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.675170  306694 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-334107"
	I0127 11:18:47.675203  306694 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-334107"
	I0127 11:18:47.675552  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.686688  306694 addons.go:69] Setting volcano=true in profile "addons-334107"
	I0127 11:18:47.686730  306694 addons.go:238] Setting addon volcano=true in "addons-334107"
	I0127 11:18:47.686770  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.687314  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.689169  306694 addons.go:69] Setting gcp-auth=true in profile "addons-334107"
	I0127 11:18:47.689198  306694 mustload.go:65] Loading cluster: addons-334107
	I0127 11:18:47.689395  306694 config.go:182] Loaded profile config "addons-334107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:18:47.689645  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.705063  306694 addons.go:69] Setting volumesnapshots=true in profile "addons-334107"
	I0127 11:18:47.705095  306694 addons.go:238] Setting addon volumesnapshots=true in "addons-334107"
	I0127 11:18:47.705131  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.705615  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.707613  306694 addons.go:69] Setting ingress=true in profile "addons-334107"
	I0127 11:18:47.707637  306694 addons.go:238] Setting addon ingress=true in "addons-334107"
	I0127 11:18:47.707755  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.708237  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.751310  306694 addons.go:69] Setting ingress-dns=true in profile "addons-334107"
	I0127 11:18:47.751342  306694 addons.go:238] Setting addon ingress-dns=true in "addons-334107"
	I0127 11:18:47.751394  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:47.751862  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:47.754850  306694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:47.829727  306694 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 11:18:47.836299  306694 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 11:18:47.836329  306694 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 11:18:47.836400  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:47.859990  306694 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 11:18:47.863304  306694 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 11:18:47.863333  306694 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 11:18:47.863407  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:47.906321  306694 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 11:18:47.920487  306694 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 11:18:47.920508  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 11:18:47.920578  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:47.940727  306694 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 11:18:47.950848  306694 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 11:18:47.961825  306694 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 11:18:47.966839  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 11:18:47.973367  306694 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 11:18:47.973389  306694 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 11:18:47.973460  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:47.974326  306694 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 11:18:47.974380  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 11:18:47.974469  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:47.991233  306694 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 11:18:47.998554  306694 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 11:18:47.999353  306694 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 11:18:47.999379  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 11:18:47.999445  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:47.999579  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 11:18:48.001553  306694 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 11:18:48.001573  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 11:18:48.001644  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.009943  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 11:18:48.011525  306694 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-334107"
	I0127 11:18:48.011578  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:48.012042  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	W0127 11:18:48.020500  306694 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 11:18:48.039917  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 11:18:48.046046  306694 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 11:18:48.046137  306694 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 11:18:48.046243  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.036987  306694 addons.go:238] Setting addon default-storageclass=true in "addons-334107"
	I0127 11:18:48.037036  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 11:18:48.037044  306694 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 11:18:48.037048  306694 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 11:18:48.046539  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.046572  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.064915  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:48.072016  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:48.072503  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:48.075396  306694 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 11:18:48.075419  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 11:18:48.075480  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.083107  306694 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:18:48.085811  306694 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 11:18:48.086918  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 11:18:48.091354  306694 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:18:48.091381  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:18:48.091451  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.104753  306694 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 11:18:48.104830  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 11:18:48.107922  306694 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 11:18:48.107946  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 11:18:48.108014  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.118365  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 11:18:48.130663  306694 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 11:18:48.135121  306694 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 11:18:48.135151  306694 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 11:18:48.135222  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.167171  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.177627  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.232247  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.242661  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.253171  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.267184  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.276590  306694 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 11:18:48.279984  306694 out.go:177]   - Using image docker.io/busybox:stable
	I0127 11:18:48.287326  306694 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 11:18:48.287353  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 11:18:48.287418  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.300965  306694 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:18:48.300987  306694 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:18:48.301046  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:48.312248  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.335252  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.337631  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.339324  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.363298  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.383200  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:48.411653  306694 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 11:18:48.411675  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 11:18:48.474116  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 11:18:48.524022  306694 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 11:18:48.524094  306694 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 11:18:48.541127  306694 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 11:18:48.541197  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 11:18:48.613788  306694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:18:48.614049  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:18:48.715923  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 11:18:48.729861  306694 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 11:18:48.729894  306694 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 11:18:48.733007  306694 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 11:18:48.733078  306694 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 11:18:48.761578  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 11:18:48.817560  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 11:18:48.838360  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:18:48.846391  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 11:18:48.866549  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:18:48.866967  306694 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 11:18:48.867018  306694 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 11:18:48.871279  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 11:18:48.885555  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 11:18:48.896703  306694 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 11:18:48.896782  306694 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 11:18:48.916071  306694 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 11:18:48.916149  306694 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 11:18:48.933055  306694 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:18:48.933134  306694 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 11:18:48.936835  306694 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 11:18:48.936910  306694 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 11:18:49.026234  306694 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 11:18:49.026310  306694 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 11:18:49.048361  306694 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 11:18:49.048444  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 11:18:49.121213  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 11:18:49.145786  306694 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 11:18:49.145860  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 11:18:49.158484  306694 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 11:18:49.158556  306694 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 11:18:49.205990  306694 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 11:18:49.206068  306694 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 11:18:49.237567  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 11:18:49.305416  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 11:18:49.340993  306694 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 11:18:49.341064  306694 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 11:18:49.427266  306694 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 11:18:49.427343  306694 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 11:18:49.554234  306694 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 11:18:49.554300  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 11:18:49.617583  306694 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 11:18:49.617656  306694 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 11:18:49.687642  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 11:18:49.718293  306694 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 11:18:49.718365  306694 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 11:18:49.808805  306694 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 11:18:49.808865  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 11:18:49.936216  306694 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 11:18:49.936244  306694 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 11:18:50.009110  306694 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 11:18:50.009138  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 11:18:50.238566  306694 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 11:18:50.238627  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 11:18:50.451212  306694 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 11:18:50.451286  306694 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 11:18:50.629920  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 11:18:53.091655  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.617459113s)
	I0127 11:18:53.091730  306694 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.477645938s)
	I0127 11:18:53.091741  306694 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0127 11:18:53.092820  306694 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.478967763s)
	I0127 11:18:53.093578  306694 node_ready.go:35] waiting up to 6m0s for node "addons-334107" to be "Ready" ...
	I0127 11:18:53.093809  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.377855291s)
	I0127 11:18:53.093862  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.33222246s)
	I0127 11:18:53.093898  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.276270551s)
	I0127 11:18:53.093956  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.255530382s)
	I0127 11:18:53.094000  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.247531625s)
	I0127 11:18:53.094031  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.227414043s)
	I0127 11:18:53.625446  306694 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-334107" context rescaled to 1 replicas
	I0127 11:18:54.129396  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.258035842s)
	I0127 11:18:54.129485  306694 addons.go:479] Verifying addon ingress=true in "addons-334107"
	I0127 11:18:54.129841  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.244213967s)
	I0127 11:18:54.129957  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.008672826s)
	I0127 11:18:54.130318  306694 addons.go:479] Verifying addon metrics-server=true in "addons-334107"
	I0127 11:18:54.129995  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.89235875s)
	I0127 11:18:54.130335  306694 addons.go:479] Verifying addon registry=true in "addons-334107"
	I0127 11:18:54.130028  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.824541103s)
	I0127 11:18:54.130117  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.442401726s)
	W0127 11:18:54.130992  306694 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 11:18:54.131019  306694 retry.go:31] will retry after 338.953876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 11:18:54.133855  306694 out.go:177] * Verifying registry addon...
	I0127 11:18:54.134027  306694 out.go:177] * Verifying ingress addon...
	I0127 11:18:54.134101  306694 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-334107 service yakd-dashboard -n yakd-dashboard
	
	I0127 11:18:54.138597  306694 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 11:18:54.139799  306694 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 11:18:54.156403  306694 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 11:18:54.156479  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:54.158368  306694 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 11:18:54.159744  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:54.379762  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.749747829s)
	I0127 11:18:54.379843  306694 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-334107"
	I0127 11:18:54.382927  306694 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 11:18:54.386790  306694 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 11:18:54.395917  306694 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 11:18:54.395946  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:54.470225  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 11:18:54.644459  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:54.646336  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:54.892102  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:55.097116  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:18:55.142884  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:55.144130  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:55.390756  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:55.642591  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:55.643949  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:55.890423  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:56.143182  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:56.144271  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:56.390981  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:56.485870  306694 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 11:18:56.485960  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:56.503330  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:56.608228  306694 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 11:18:56.627600  306694 addons.go:238] Setting addon gcp-auth=true in "addons-334107"
	I0127 11:18:56.627655  306694 host.go:66] Checking if "addons-334107" exists ...
	I0127 11:18:56.628111  306694 cli_runner.go:164] Run: docker container inspect addons-334107 --format={{.State.Status}}
	I0127 11:18:56.644714  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:56.645609  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:56.649289  306694 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 11:18:56.649356  306694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-334107
	I0127 11:18:56.667628  306694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/addons-334107/id_rsa Username:docker}
	I0127 11:18:56.895316  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:57.097209  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:18:57.132007  306694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.661732626s)
	I0127 11:18:57.135310  306694 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 11:18:57.138134  306694 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 11:18:57.140992  306694 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 11:18:57.141058  306694 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 11:18:57.145277  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:57.146609  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:57.158882  306694 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 11:18:57.158907  306694 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 11:18:57.177721  306694 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 11:18:57.177746  306694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 11:18:57.195973  306694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 11:18:57.391057  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:57.653648  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:57.657218  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:57.713873  306694 addons.go:479] Verifying addon gcp-auth=true in "addons-334107"
	I0127 11:18:57.716896  306694 out.go:177] * Verifying gcp-auth addon...
	I0127 11:18:57.720589  306694 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 11:18:57.753069  306694 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 11:18:57.753096  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:18:57.890552  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:58.142932  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:58.143219  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:58.224150  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:18:58.390495  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:58.642094  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:58.643984  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:58.724380  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:18:58.891202  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:59.141735  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:59.143467  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:59.223572  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:18:59.390860  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:18:59.596654  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:18:59.643215  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:18:59.644062  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:18:59.724800  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:18:59.890932  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:00.170374  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:00.171365  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:00.238514  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:00.392131  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:00.644040  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:00.646983  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:00.724553  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:00.890691  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:01.142502  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:01.144389  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:01.243755  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:01.390823  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:01.597044  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:01.642390  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:01.644249  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:01.724507  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:01.891417  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:02.142292  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:02.143397  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:02.224798  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:02.390630  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:02.642008  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:02.643977  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:02.724449  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:02.890928  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:03.143171  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:03.144729  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:03.224146  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:03.390191  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:03.642696  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:03.643572  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:03.723922  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:03.891166  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:04.096860  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:04.142539  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:04.143579  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:04.223679  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:04.392381  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:04.641948  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:04.644208  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:04.724776  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:04.890429  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:05.142409  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:05.143578  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:05.223705  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:05.391363  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:05.642059  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:05.645306  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:05.725415  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:05.890369  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:06.097026  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:06.142372  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:06.144143  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:06.224358  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:06.391237  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:06.643183  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:06.644041  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:06.724654  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:06.890473  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:07.142539  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:07.145333  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:07.224500  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:07.390506  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:07.642567  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:07.643917  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:07.724136  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:07.890377  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:08.097092  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:08.143258  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:08.144199  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:08.224687  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:08.390610  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:08.642178  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:08.645062  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:08.724252  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:08.890614  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:09.143544  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:09.145140  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:09.224561  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:09.390408  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:09.642666  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:09.644342  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:09.724836  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:09.890387  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:10.143191  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:10.144263  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:10.224682  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:10.390444  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:10.596903  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:10.642494  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:10.643777  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:10.724300  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:10.890837  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:11.142167  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:11.143757  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:11.224164  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:11.390416  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:11.643618  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:11.644064  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:11.724586  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:11.891139  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:12.144541  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:12.145728  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:12.223854  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:12.390809  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:12.597592  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:12.642352  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:12.644368  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:12.724901  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:12.891179  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:13.143116  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:13.144893  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:13.224330  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:13.390802  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:13.643950  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:13.644750  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:13.724581  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:13.890510  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:14.142985  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:14.144307  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:14.224527  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:14.390889  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:14.641538  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:14.644861  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:14.724263  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:14.890447  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:15.097205  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:15.143427  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:15.144203  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:15.224312  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:15.390970  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:15.644116  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:15.644495  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:15.724308  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:15.890630  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:16.143773  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:16.144775  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:16.224342  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:16.390446  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:16.644215  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:16.644502  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:16.726044  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:16.891116  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:17.097464  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:17.142469  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:17.144738  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:17.224073  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:17.390082  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:17.644363  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:17.648152  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:17.723895  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:17.890902  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:18.142137  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:18.143795  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:18.224086  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:18.390571  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:18.642709  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:18.643696  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:18.724018  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:18.890289  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:19.142607  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:19.145521  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:19.224751  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:19.390803  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:19.597081  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:19.641632  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:19.643302  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:19.724515  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:19.890870  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:20.142691  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:20.143745  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:20.224408  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:20.390408  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:20.642920  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:20.643701  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:20.724006  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:20.890487  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:21.142673  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:21.144531  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:21.223824  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:21.390327  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:21.597338  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:21.642814  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:21.644473  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:21.724028  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:21.891626  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:22.142094  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:22.143105  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:22.224180  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:22.390602  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:22.642486  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:22.644345  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:22.724607  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:22.890235  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:23.142892  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:23.144189  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:23.224454  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:23.390781  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:23.642205  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:23.644211  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:23.724810  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:23.890062  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:24.096767  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:24.143194  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:24.144394  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:24.224537  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:24.390822  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:24.641939  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:24.643885  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:24.724239  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:24.891446  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:25.141913  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:25.143741  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:25.224309  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:25.390609  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:25.642059  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:25.643228  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:25.724135  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:25.890283  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:26.097593  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:26.141843  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:26.142959  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:26.224568  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:26.390486  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:26.642660  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:26.643538  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:26.723781  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:26.889969  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:27.141795  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:27.143941  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:27.224116  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:27.390760  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:27.641600  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:27.644161  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:27.724664  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:27.890091  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:28.143104  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:28.144047  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:28.224801  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:28.390352  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:28.597081  306694 node_ready.go:53] node "addons-334107" has status "Ready":"False"
	I0127 11:19:28.643180  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:28.645276  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:28.724462  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:28.890612  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:29.143514  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:29.144267  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:29.224549  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:29.390743  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:29.642117  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:29.644554  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:29.723591  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:29.891154  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:30.141908  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:30.143718  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:30.223810  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:30.428260  306694 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 11:19:30.428367  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:30.597601  306694 node_ready.go:49] node "addons-334107" has status "Ready":"True"
	I0127 11:19:30.597676  306694 node_ready.go:38] duration metric: took 37.504068961s for node "addons-334107" to be "Ready" ...
	I0127 11:19:30.597700  306694 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:19:30.612154  306694 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-dqb5w" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:30.723156  306694 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 11:19:30.723218  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:30.725771  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:30.767441  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:30.894587  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:31.151967  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:31.153200  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:31.248123  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:31.414950  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:31.644127  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:31.645109  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:31.724429  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:31.893040  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:32.121518  306694 pod_ready.go:93] pod "coredns-668d6bf9bc-dqb5w" in "kube-system" namespace has status "Ready":"True"
	I0127 11:19:32.121542  306694 pod_ready.go:82] duration metric: took 1.509313773s for pod "coredns-668d6bf9bc-dqb5w" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.121576  306694 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.130976  306694 pod_ready.go:93] pod "etcd-addons-334107" in "kube-system" namespace has status "Ready":"True"
	I0127 11:19:32.131048  306694 pod_ready.go:82] duration metric: took 9.45477ms for pod "etcd-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.131134  306694 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.137419  306694 pod_ready.go:93] pod "kube-apiserver-addons-334107" in "kube-system" namespace has status "Ready":"True"
	I0127 11:19:32.137492  306694 pod_ready.go:82] duration metric: took 6.330686ms for pod "kube-apiserver-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.137518  306694 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.145157  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:32.145774  306694 pod_ready.go:93] pod "kube-controller-manager-addons-334107" in "kube-system" namespace has status "Ready":"True"
	I0127 11:19:32.145887  306694 pod_ready.go:82] duration metric: took 8.346712ms for pod "kube-controller-manager-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.145933  306694 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qjrg8" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.146704  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:32.199713  306694 pod_ready.go:93] pod "kube-proxy-qjrg8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:19:32.199740  306694 pod_ready.go:82] duration metric: took 53.753006ms for pod "kube-proxy-qjrg8" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.199753  306694 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.224310  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:32.391236  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:32.597454  306694 pod_ready.go:93] pod "kube-scheduler-addons-334107" in "kube-system" namespace has status "Ready":"True"
	I0127 11:19:32.597480  306694 pod_ready.go:82] duration metric: took 397.720576ms for pod "kube-scheduler-addons-334107" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.597492  306694 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:32.643392  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:32.644468  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:32.724857  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:32.892172  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:33.144612  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:33.145034  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:33.225003  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:33.392535  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:33.645605  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:33.646343  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:33.725160  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:33.893344  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:34.155989  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:34.156912  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:34.249257  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:34.392262  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:34.604282  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:34.646069  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:34.647399  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:34.725206  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:34.892388  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:35.144554  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:35.148633  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:35.224116  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:35.392107  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:35.645752  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:35.650173  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:35.730717  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:35.892345  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:36.148418  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:36.149230  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:36.225017  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:36.392438  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:36.605131  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:36.647975  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:36.649533  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:36.724546  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:36.892417  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:37.145578  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:37.147633  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:37.224148  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:37.407551  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:37.650917  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:37.653869  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:37.724786  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:37.891882  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:38.151817  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:38.154032  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:38.250106  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:38.392995  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:38.605341  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:38.647342  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:38.648340  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:38.725649  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:38.891788  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:39.144846  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:39.145783  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:39.224418  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:39.392252  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:39.642401  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:39.644047  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:39.725676  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:39.893447  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:40.145397  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:40.146413  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:40.224688  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:40.391778  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:40.645862  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:40.646794  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:40.745696  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:40.892754  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:41.104022  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:41.144238  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:41.145076  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:41.224454  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:41.391506  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:41.643677  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:41.644880  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:41.724790  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:41.892362  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:42.152379  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:42.152873  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:42.225651  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:42.398840  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:42.645260  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:42.647246  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:42.724870  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:42.892474  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:43.105085  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:43.143838  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:43.144897  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:43.225490  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:43.393168  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:43.643017  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:43.644809  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:43.730316  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:43.892266  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:44.144807  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:44.145743  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:44.246515  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:44.391907  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:44.642257  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:44.643455  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:44.724915  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:44.891336  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:45.107847  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:45.143472  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:45.145315  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:45.239666  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:45.392134  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:45.648610  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:45.649656  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:45.724447  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:45.893054  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:46.145819  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:46.148901  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:46.224895  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:46.392343  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:46.643180  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:46.645181  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:46.724736  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:46.891377  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:47.144067  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:47.145988  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:47.229414  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:47.393407  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:47.605835  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:47.646002  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:47.647481  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:47.725681  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:47.898051  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:48.145738  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:48.146677  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:48.232601  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:48.401157  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:48.646872  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:48.648476  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:48.724949  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:48.893247  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:49.144894  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:49.146173  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:49.224750  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:49.392387  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:49.642493  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:49.644393  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:49.724565  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:49.892640  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:50.103347  306694 pod_ready.go:103] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:50.142979  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:50.145166  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:50.228041  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:50.392305  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:50.604531  306694 pod_ready.go:93] pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace has status "Ready":"True"
	I0127 11:19:50.604557  306694 pod_ready.go:82] duration metric: took 18.007057012s for pod "metrics-server-7fbb699795-6x2xs" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:50.604569  306694 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace to be "Ready" ...
	I0127 11:19:50.641984  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:50.644884  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:50.724874  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:50.892966  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:51.143648  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:51.143928  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:51.224115  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:51.403545  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:51.645692  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:51.647011  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:51.724638  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:51.892019  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:52.145827  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:52.146660  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:52.224434  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:52.393060  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:52.611783  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:52.644757  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:52.646017  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:52.724797  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:52.891377  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:53.150592  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:53.151983  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:53.224863  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:53.392839  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:53.652984  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:53.654219  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:53.725208  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:53.894827  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:54.146435  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:54.147709  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:54.224389  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:54.391978  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:54.643599  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:54.644581  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:54.724054  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:54.891738  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:55.112252  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:55.143031  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:55.149711  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:55.224818  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:55.392213  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:55.648524  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:55.652312  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:55.725142  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:55.892693  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:56.143452  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:56.144786  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:56.224275  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:56.392270  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:56.642793  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:56.645007  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:56.724464  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:56.894169  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:57.144721  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:57.145967  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:57.224200  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:57.391796  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:57.610845  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:57.643780  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:57.645775  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:57.724347  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:57.891836  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:58.144189  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:58.146498  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:58.225664  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:58.392235  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:58.648189  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:58.650517  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:58.724789  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:58.894825  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:59.144871  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:59.147514  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:59.224810  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:59.392519  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:19:59.615004  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:19:59.649042  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:19:59.651968  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:19:59.725359  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:19:59.892996  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:00.161891  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:00.179182  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:00.259794  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:00.433153  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:00.680237  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:00.709124  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:00.741094  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:00.911912  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:01.153827  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:01.158753  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:01.224637  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:01.394235  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:01.646603  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:01.647600  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:01.724039  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:01.903386  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:02.111465  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:02.144370  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:02.145547  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:02.226919  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:02.409923  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:02.647379  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:02.647509  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:02.725275  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:02.892955  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:03.144123  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:03.144972  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:03.224530  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:03.393992  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:03.645963  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:03.647308  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:03.745256  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:03.894657  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:04.142415  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:04.144947  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:04.226046  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:04.392304  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:04.610812  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:04.642976  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:04.644622  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:04.724043  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:04.891749  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:05.143910  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:05.144923  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:05.227783  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:05.391648  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:05.644101  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:05.646588  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:05.724849  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:05.891905  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:06.146140  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:06.147029  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:06.225497  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:06.391586  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:06.611777  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:06.644006  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:06.644767  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:06.727518  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:06.892455  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:07.144869  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:07.148269  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:07.226734  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:07.458454  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:07.644718  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:07.646169  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:07.725255  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:07.892209  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:08.144149  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:08.145995  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:08.224741  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:08.391873  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:08.616472  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:08.642470  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:08.645470  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:08.724766  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:08.894535  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:09.145092  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:09.145839  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 11:20:09.224029  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:09.392639  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:09.643413  306694 kapi.go:107] duration metric: took 1m15.504815239s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 11:20:09.644876  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:09.724202  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:09.891856  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:10.145682  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:10.224000  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:10.391931  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:10.645594  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:10.725312  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:10.891627  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:11.110590  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:11.144287  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:11.224824  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:11.392686  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:11.646988  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:11.726625  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:11.914585  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:12.146089  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:12.235013  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:12.394251  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:12.644413  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:12.724360  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:12.891940  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:13.110942  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:13.145512  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:13.223776  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:13.391858  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:13.644708  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:13.724142  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:13.892133  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:14.145189  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:14.228942  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:14.392518  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:14.645037  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:14.724848  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:14.892234  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:15.114659  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:15.145113  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:15.225073  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:15.392698  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:15.645609  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:15.724533  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:15.902108  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:16.144934  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:16.224606  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:16.391903  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:16.645060  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:16.725152  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:16.892119  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:17.145033  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:17.224633  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:17.391528  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:17.611849  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:17.644823  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:17.726727  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:17.906229  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:18.146807  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:18.224456  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:18.392254  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:18.647299  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:18.725322  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:18.892599  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:19.144734  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:19.224352  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:19.392269  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:19.614148  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:19.645098  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:19.724859  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:19.891929  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:20.146472  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:20.225591  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:20.393036  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:20.645240  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:20.725151  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:20.892602  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:21.145807  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:21.227709  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:21.393152  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:21.648358  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:21.726184  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:21.893229  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:22.120622  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:22.148599  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:22.225481  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:22.392948  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:22.644962  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:22.724435  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:22.892531  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:23.146437  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:23.224705  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:23.393224  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:23.644815  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:23.727763  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:23.893079  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:24.144875  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:24.224433  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:24.391513  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:24.612488  306694 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"False"
	I0127 11:20:24.644413  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:24.725671  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:24.891369  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:25.112302  306694 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:25.112342  306694 pod_ready.go:82] duration metric: took 34.507763386s for pod "nvidia-device-plugin-daemonset-nk7kx" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:25.112385  306694 pod_ready.go:39] duration metric: took 54.514640752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:20:25.112408  306694 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:20:25.112441  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:20:25.112527  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:20:25.147237  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:25.174384  306694 cri.go:89] found id: "d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360"
	I0127 11:20:25.174464  306694 cri.go:89] found id: ""
	I0127 11:20:25.174487  306694 logs.go:282] 1 containers: [d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360]
	I0127 11:20:25.174581  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:25.182121  306694 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:20:25.182200  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:20:25.225476  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:25.234260  306694 cri.go:89] found id: "10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab"
	I0127 11:20:25.234298  306694 cri.go:89] found id: ""
	I0127 11:20:25.234307  306694 logs.go:282] 1 containers: [10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab]
	I0127 11:20:25.234374  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:25.238098  306694 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:20:25.238176  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:20:25.298273  306694 cri.go:89] found id: "488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef"
	I0127 11:20:25.298297  306694 cri.go:89] found id: ""
	I0127 11:20:25.298304  306694 logs.go:282] 1 containers: [488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef]
	I0127 11:20:25.298372  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:25.302176  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:20:25.302287  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:20:25.345566  306694 cri.go:89] found id: "d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304"
	I0127 11:20:25.345589  306694 cri.go:89] found id: ""
	I0127 11:20:25.345597  306694 logs.go:282] 1 containers: [d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304]
	I0127 11:20:25.345681  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:25.349376  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:20:25.349490  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:20:25.392816  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:25.405415  306694 cri.go:89] found id: "c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123"
	I0127 11:20:25.405436  306694 cri.go:89] found id: ""
	I0127 11:20:25.405444  306694 logs.go:282] 1 containers: [c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123]
	I0127 11:20:25.405510  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:25.411365  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:20:25.411478  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:20:25.459798  306694 cri.go:89] found id: "e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7"
	I0127 11:20:25.459830  306694 cri.go:89] found id: ""
	I0127 11:20:25.459839  306694 logs.go:282] 1 containers: [e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7]
	I0127 11:20:25.459944  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:25.464272  306694 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:20:25.464384  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:20:25.516446  306694 cri.go:89] found id: "e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806"
	I0127 11:20:25.516471  306694 cri.go:89] found id: ""
	I0127 11:20:25.516480  306694 logs.go:282] 1 containers: [e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806]
	I0127 11:20:25.516571  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:25.520764  306694 logs.go:123] Gathering logs for kubelet ...
	I0127 11:20:25.520790  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:20:25.639473  306694 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:20:25.639510  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 11:20:25.646117  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:25.724881  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:25.893312  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:25.962270  306694 logs.go:123] Gathering logs for kube-scheduler [d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304] ...
	I0127 11:20:25.962338  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304"
	I0127 11:20:26.105576  306694 logs.go:123] Gathering logs for kindnet [e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806] ...
	I0127 11:20:26.105631  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806"
	I0127 11:20:26.148796  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:26.165512  306694 logs.go:123] Gathering logs for container status ...
	I0127 11:20:26.165605  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:20:26.225832  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:26.265200  306694 logs.go:123] Gathering logs for dmesg ...
	I0127 11:20:26.265238  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:20:26.312051  306694 logs.go:123] Gathering logs for kube-apiserver [d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360] ...
	I0127 11:20:26.312088  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360"
	I0127 11:20:26.391784  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:26.424126  306694 logs.go:123] Gathering logs for etcd [10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab] ...
	I0127 11:20:26.424180  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab"
	I0127 11:20:26.506081  306694 logs.go:123] Gathering logs for coredns [488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef] ...
	I0127 11:20:26.506118  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef"
	I0127 11:20:26.566172  306694 logs.go:123] Gathering logs for kube-proxy [c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123] ...
	I0127 11:20:26.566201  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123"
	I0127 11:20:26.610948  306694 logs.go:123] Gathering logs for kube-controller-manager [e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7] ...
	I0127 11:20:26.610977  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7"
	I0127 11:20:26.645055  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:26.686824  306694 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:20:26.686866  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:20:26.745414  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:26.892335  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:27.144298  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:27.224178  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:27.391851  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:27.644579  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:27.724863  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:27.892379  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:28.144988  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:28.224192  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:28.392565  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:28.645035  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:28.724815  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:28.894339  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:29.145062  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:29.224759  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:29.310096  306694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:20:29.331470  306694 api_server.go:72] duration metric: took 1m41.691989299s to wait for apiserver process to appear ...
	I0127 11:20:29.331498  306694 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:20:29.331533  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:20:29.331592  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:20:29.392755  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:29.409068  306694 cri.go:89] found id: "d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360"
	I0127 11:20:29.409095  306694 cri.go:89] found id: ""
	I0127 11:20:29.409104  306694 logs.go:282] 1 containers: [d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360]
	I0127 11:20:29.409160  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:29.412801  306694 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:20:29.412880  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:20:29.451165  306694 cri.go:89] found id: "10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab"
	I0127 11:20:29.451192  306694 cri.go:89] found id: ""
	I0127 11:20:29.451201  306694 logs.go:282] 1 containers: [10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab]
	I0127 11:20:29.451259  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:29.455271  306694 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:20:29.455346  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:20:29.495905  306694 cri.go:89] found id: "488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef"
	I0127 11:20:29.495928  306694 cri.go:89] found id: ""
	I0127 11:20:29.495935  306694 logs.go:282] 1 containers: [488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef]
	I0127 11:20:29.495993  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:29.499610  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:20:29.499683  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:20:29.540580  306694 cri.go:89] found id: "d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304"
	I0127 11:20:29.540653  306694 cri.go:89] found id: ""
	I0127 11:20:29.540668  306694 logs.go:282] 1 containers: [d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304]
	I0127 11:20:29.540728  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:29.545176  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:20:29.545309  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:20:29.592186  306694 cri.go:89] found id: "c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123"
	I0127 11:20:29.592208  306694 cri.go:89] found id: ""
	I0127 11:20:29.592216  306694 logs.go:282] 1 containers: [c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123]
	I0127 11:20:29.592274  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:29.596173  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:20:29.596242  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:20:29.635652  306694 cri.go:89] found id: "e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7"
	I0127 11:20:29.635676  306694 cri.go:89] found id: ""
	I0127 11:20:29.635687  306694 logs.go:282] 1 containers: [e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7]
	I0127 11:20:29.635748  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:29.639418  306694 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:20:29.639492  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:20:29.645145  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:29.680076  306694 cri.go:89] found id: "e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806"
	I0127 11:20:29.680148  306694 cri.go:89] found id: ""
	I0127 11:20:29.680170  306694 logs.go:282] 1 containers: [e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806]
	I0127 11:20:29.680260  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:29.685031  306694 logs.go:123] Gathering logs for kubelet ...
	I0127 11:20:29.685106  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:20:29.731799  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:29.800357  306694 logs.go:123] Gathering logs for dmesg ...
	I0127 11:20:29.800392  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:20:29.825251  306694 logs.go:123] Gathering logs for kube-scheduler [d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304] ...
	I0127 11:20:29.825277  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304"
	I0127 11:20:29.892543  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:29.894920  306694 logs.go:123] Gathering logs for kube-proxy [c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123] ...
	I0127 11:20:29.894978  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123"
	I0127 11:20:30.084783  306694 logs.go:123] Gathering logs for container status ...
	I0127 11:20:30.084875  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:20:30.152262  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:30.225115  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:30.248861  306694 logs.go:123] Gathering logs for kindnet [e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806] ...
	I0127 11:20:30.248936  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806"
	I0127 11:20:30.318582  306694 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:20:30.318662  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:20:30.391847  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:30.437001  306694 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:20:30.437042  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 11:20:30.607494  306694 logs.go:123] Gathering logs for kube-apiserver [d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360] ...
	I0127 11:20:30.607532  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360"
	I0127 11:20:30.645378  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:30.725118  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:30.794751  306694 logs.go:123] Gathering logs for etcd [10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab] ...
	I0127 11:20:30.794789  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab"
	I0127 11:20:30.900638  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:30.920374  306694 logs.go:123] Gathering logs for coredns [488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef] ...
	I0127 11:20:30.920413  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef"
	I0127 11:20:31.088930  306694 logs.go:123] Gathering logs for kube-controller-manager [e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7] ...
	I0127 11:20:31.088969  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7"
	I0127 11:20:31.150295  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:31.224583  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:31.391706  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:31.644837  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:31.730111  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:31.897458  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:32.149779  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:32.224569  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:32.391794  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:32.645947  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:32.724760  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:32.897058  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:33.145634  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:33.226992  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:33.391978  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:33.646246  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:33.705419  306694 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0127 11:20:33.714526  306694 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0127 11:20:33.718429  306694 api_server.go:141] control plane version: v1.32.1
	I0127 11:20:33.718460  306694 api_server.go:131] duration metric: took 4.386954062s to wait for apiserver health ...
	I0127 11:20:33.718470  306694 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:20:33.718493  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 11:20:33.718553  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 11:20:33.725119  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:33.763212  306694 cri.go:89] found id: "d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360"
	I0127 11:20:33.763235  306694 cri.go:89] found id: ""
	I0127 11:20:33.763244  306694 logs.go:282] 1 containers: [d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360]
	I0127 11:20:33.763304  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:33.767115  306694 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 11:20:33.767189  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 11:20:33.820870  306694 cri.go:89] found id: "10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab"
	I0127 11:20:33.820896  306694 cri.go:89] found id: ""
	I0127 11:20:33.820904  306694 logs.go:282] 1 containers: [10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab]
	I0127 11:20:33.820961  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:33.825113  306694 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 11:20:33.825206  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 11:20:33.871270  306694 cri.go:89] found id: "488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef"
	I0127 11:20:33.871293  306694 cri.go:89] found id: ""
	I0127 11:20:33.871302  306694 logs.go:282] 1 containers: [488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef]
	I0127 11:20:33.871360  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:33.875252  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 11:20:33.875328  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 11:20:33.911272  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:33.946444  306694 cri.go:89] found id: "d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304"
	I0127 11:20:33.946468  306694 cri.go:89] found id: ""
	I0127 11:20:33.946476  306694 logs.go:282] 1 containers: [d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304]
	I0127 11:20:33.946531  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:33.952799  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 11:20:33.952876  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 11:20:34.010507  306694 cri.go:89] found id: "c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123"
	I0127 11:20:34.010531  306694 cri.go:89] found id: ""
	I0127 11:20:34.010540  306694 logs.go:282] 1 containers: [c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123]
	I0127 11:20:34.010612  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:34.015202  306694 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 11:20:34.015286  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 11:20:34.077560  306694 cri.go:89] found id: "e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7"
	I0127 11:20:34.077585  306694 cri.go:89] found id: ""
	I0127 11:20:34.077593  306694 logs.go:282] 1 containers: [e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7]
	I0127 11:20:34.077655  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:34.091661  306694 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 11:20:34.091738  306694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 11:20:34.149030  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:34.172356  306694 cri.go:89] found id: "e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806"
	I0127 11:20:34.172376  306694 cri.go:89] found id: ""
	I0127 11:20:34.172384  306694 logs.go:282] 1 containers: [e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806]
	I0127 11:20:34.172442  306694 ssh_runner.go:195] Run: which crictl
	I0127 11:20:34.176648  306694 logs.go:123] Gathering logs for describe nodes ...
	I0127 11:20:34.176672  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 11:20:34.253144  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:34.301830  306694 logs.go:123] Gathering logs for kube-apiserver [d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360] ...
	I0127 11:20:34.301869  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360"
	I0127 11:20:34.397801  306694 logs.go:123] Gathering logs for kube-controller-manager [e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7] ...
	I0127 11:20:34.400806  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7"
	I0127 11:20:34.402767  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:34.479877  306694 logs.go:123] Gathering logs for kindnet [e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806] ...
	I0127 11:20:34.479968  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806"
	I0127 11:20:34.545636  306694 logs.go:123] Gathering logs for CRI-O ...
	I0127 11:20:34.545715  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 11:20:34.653455  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:34.655182  306694 logs.go:123] Gathering logs for kube-proxy [c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123] ...
	I0127 11:20:34.655368  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123"
	I0127 11:20:34.732279  306694 logs.go:123] Gathering logs for container status ...
	I0127 11:20:34.732306  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 11:20:34.736054  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:34.829603  306694 logs.go:123] Gathering logs for kubelet ...
	I0127 11:20:34.829636  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 11:20:34.895188  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:34.966990  306694 logs.go:123] Gathering logs for dmesg ...
	I0127 11:20:34.967036  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 11:20:35.017712  306694 logs.go:123] Gathering logs for etcd [10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab] ...
	I0127 11:20:35.017743  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab"
	I0127 11:20:35.107537  306694 logs.go:123] Gathering logs for coredns [488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef] ...
	I0127 11:20:35.107619  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef"
	I0127 11:20:35.145741  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:35.166117  306694 logs.go:123] Gathering logs for kube-scheduler [d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304] ...
	I0127 11:20:35.166148  306694 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304"
	I0127 11:20:35.225597  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:35.393106  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:35.644889  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:35.726486  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:35.896525  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:36.149655  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:36.247971  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:36.392946  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:36.643907  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:36.724268  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:36.891689  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:37.145175  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:37.225234  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:37.393514  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:37.647567  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:37.724091  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:37.761769  306694 system_pods.go:59] 18 kube-system pods found
	I0127 11:20:37.761810  306694 system_pods.go:61] "coredns-668d6bf9bc-dqb5w" [5807c1a3-c887-4c82-986a-204686a841d0] Running
	I0127 11:20:37.761818  306694 system_pods.go:61] "csi-hostpath-attacher-0" [7c932eb0-5955-4351-ac80-7ff157d8abb5] Running
	I0127 11:20:37.761823  306694 system_pods.go:61] "csi-hostpath-resizer-0" [564b2cfb-178b-48f0-9be6-0f482b0d0abb] Running
	I0127 11:20:37.761832  306694 system_pods.go:61] "csi-hostpathplugin-jh58g" [56a619e8-8993-4ad8-ba50-d83fce7f98b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 11:20:37.761837  306694 system_pods.go:61] "etcd-addons-334107" [5d4d4d91-eb3d-4de6-a7dd-df41a9b0206b] Running
	I0127 11:20:37.761843  306694 system_pods.go:61] "kindnet-mgrzk" [144cea77-9612-4d33-a709-2c0ac3ceaf1f] Running
	I0127 11:20:37.761848  306694 system_pods.go:61] "kube-apiserver-addons-334107" [4ab971fe-c4d6-4acf-a145-94ae2c5e2cf6] Running
	I0127 11:20:37.761852  306694 system_pods.go:61] "kube-controller-manager-addons-334107" [537578ec-6944-44e1-9d8a-30a24121e69d] Running
	I0127 11:20:37.761858  306694 system_pods.go:61] "kube-ingress-dns-minikube" [682fdda0-d62b-4a4b-809b-432d27b66b09] Running
	I0127 11:20:37.761862  306694 system_pods.go:61] "kube-proxy-qjrg8" [7e88a76b-2dbb-4582-8e96-4478c1908e6e] Running
	I0127 11:20:37.761865  306694 system_pods.go:61] "kube-scheduler-addons-334107" [bddc385e-1732-40cd-ad9f-6a7df7023c76] Running
	I0127 11:20:37.761869  306694 system_pods.go:61] "metrics-server-7fbb699795-6x2xs" [4eef1a60-9197-4899-8aff-d30f6c7b06ec] Running
	I0127 11:20:37.761880  306694 system_pods.go:61] "nvidia-device-plugin-daemonset-nk7kx" [29324f78-b52b-4c83-ae33-af69c72c4c06] Running
	I0127 11:20:37.761884  306694 system_pods.go:61] "registry-6c88467877-w9657" [bca942c5-f096-4159-8437-b4ad70f2524a] Running
	I0127 11:20:37.761888  306694 system_pods.go:61] "registry-proxy-gcxqf" [63d96504-e7b4-42c0-b091-b2fcb073e611] Running
	I0127 11:20:37.761892  306694 system_pods.go:61] "snapshot-controller-68b874b76f-mk57b" [2c0c146e-9f93-4a77-8515-0be41dfa9c9a] Running
	I0127 11:20:37.761902  306694 system_pods.go:61] "snapshot-controller-68b874b76f-zddhk" [1fdd5db8-41c7-4633-b52f-8b86c96fdb15] Running
	I0127 11:20:37.761906  306694 system_pods.go:61] "storage-provisioner" [9d39271e-a897-4a83-9ad2-03b8b5ba0d2a] Running
	I0127 11:20:37.761914  306694 system_pods.go:74] duration metric: took 4.043437499s to wait for pod list to return data ...
	I0127 11:20:37.761926  306694 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:20:37.764441  306694 default_sa.go:45] found service account: "default"
	I0127 11:20:37.764507  306694 default_sa.go:55] duration metric: took 2.557612ms for default service account to be created ...
	I0127 11:20:37.764532  306694 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:20:37.781915  306694 system_pods.go:87] 18 kube-system pods found
	I0127 11:20:37.785233  306694 system_pods.go:105] "coredns-668d6bf9bc-dqb5w" [5807c1a3-c887-4c82-986a-204686a841d0] Running
	I0127 11:20:37.785268  306694 system_pods.go:105] "csi-hostpath-attacher-0" [7c932eb0-5955-4351-ac80-7ff157d8abb5] Running
	I0127 11:20:37.785275  306694 system_pods.go:105] "csi-hostpath-resizer-0" [564b2cfb-178b-48f0-9be6-0f482b0d0abb] Running
	I0127 11:20:37.785285  306694 system_pods.go:105] "csi-hostpathplugin-jh58g" [56a619e8-8993-4ad8-ba50-d83fce7f98b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 11:20:37.785292  306694 system_pods.go:105] "etcd-addons-334107" [5d4d4d91-eb3d-4de6-a7dd-df41a9b0206b] Running
	I0127 11:20:37.785299  306694 system_pods.go:105] "kindnet-mgrzk" [144cea77-9612-4d33-a709-2c0ac3ceaf1f] Running
	I0127 11:20:37.785304  306694 system_pods.go:105] "kube-apiserver-addons-334107" [4ab971fe-c4d6-4acf-a145-94ae2c5e2cf6] Running
	I0127 11:20:37.785309  306694 system_pods.go:105] "kube-controller-manager-addons-334107" [537578ec-6944-44e1-9d8a-30a24121e69d] Running
	I0127 11:20:37.785315  306694 system_pods.go:105] "kube-ingress-dns-minikube" [682fdda0-d62b-4a4b-809b-432d27b66b09] Running
	I0127 11:20:37.785325  306694 system_pods.go:105] "kube-proxy-qjrg8" [7e88a76b-2dbb-4582-8e96-4478c1908e6e] Running
	I0127 11:20:37.785336  306694 system_pods.go:105] "kube-scheduler-addons-334107" [bddc385e-1732-40cd-ad9f-6a7df7023c76] Running
	I0127 11:20:37.785346  306694 system_pods.go:105] "metrics-server-7fbb699795-6x2xs" [4eef1a60-9197-4899-8aff-d30f6c7b06ec] Running
	I0127 11:20:37.785352  306694 system_pods.go:105] "nvidia-device-plugin-daemonset-nk7kx" [29324f78-b52b-4c83-ae33-af69c72c4c06] Running
	I0127 11:20:37.785357  306694 system_pods.go:105] "registry-6c88467877-w9657" [bca942c5-f096-4159-8437-b4ad70f2524a] Running
	I0127 11:20:37.785364  306694 system_pods.go:105] "registry-proxy-gcxqf" [63d96504-e7b4-42c0-b091-b2fcb073e611] Running
	I0127 11:20:37.785369  306694 system_pods.go:105] "snapshot-controller-68b874b76f-mk57b" [2c0c146e-9f93-4a77-8515-0be41dfa9c9a] Running
	I0127 11:20:37.785377  306694 system_pods.go:105] "snapshot-controller-68b874b76f-zddhk" [1fdd5db8-41c7-4633-b52f-8b86c96fdb15] Running
	I0127 11:20:37.785382  306694 system_pods.go:105] "storage-provisioner" [9d39271e-a897-4a83-9ad2-03b8b5ba0d2a] Running
	I0127 11:20:37.785390  306694 system_pods.go:147] duration metric: took 20.839702ms to wait for k8s-apps to be running ...
	I0127 11:20:37.785400  306694 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:20:37.785465  306694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:20:37.818345  306694 system_svc.go:56] duration metric: took 32.929706ms WaitForService to wait for kubelet
	I0127 11:20:37.818378  306694 kubeadm.go:582] duration metric: took 1m50.17890129s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:20:37.818399  306694 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:20:37.822370  306694 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0127 11:20:37.822403  306694 node_conditions.go:123] node cpu capacity is 2
	I0127 11:20:37.822417  306694 node_conditions.go:105] duration metric: took 4.011933ms to run NodePressure ...
	I0127 11:20:37.822429  306694 start.go:241] waiting for startup goroutines ...
	I0127 11:20:37.892419  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:38.146178  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:38.225863  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:38.393159  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:38.645148  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:38.725422  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:38.891907  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:39.145510  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:39.224065  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:39.392171  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:39.645332  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:39.724728  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:39.892360  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:40.146210  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:40.225595  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:40.392093  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:40.644958  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:40.725311  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:40.892860  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:41.144959  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:41.224755  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:41.391591  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:41.646356  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:41.724489  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:41.893238  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:42.148260  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:42.249108  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:42.399660  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:42.645022  306694 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 11:20:42.725399  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:42.901813  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:43.145192  306694 kapi.go:107] duration metric: took 1m49.005388853s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 11:20:43.224587  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:43.391757  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:43.724807  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:43.891693  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:44.224904  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:44.392007  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:44.724562  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:44.891808  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:45.226778  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:45.393146  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:45.725857  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:45.892080  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:46.225482  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:46.392569  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:46.725068  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:46.894620  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:47.224814  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:47.392325  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:47.724475  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 11:20:47.894191  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:48.224295  306694 kapi.go:107] duration metric: took 1m50.503708254s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 11:20:48.227427  306694 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-334107 cluster.
	I0127 11:20:48.230274  306694 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 11:20:48.233283  306694 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 11:20:48.392867  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:48.891997  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:49.394178  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:49.899854  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:50.392143  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:50.894221  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:51.392067  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:51.892432  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:52.392770  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:52.892041  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:53.392117  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:53.891989  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:54.392511  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:54.891608  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:55.394288  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:55.894630  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:56.392238  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:56.892051  306694 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 11:20:57.392275  306694 kapi.go:107] duration metric: took 2m3.00548103s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 11:20:57.395366  306694 out.go:177] * Enabled addons: inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0127 11:20:57.398231  306694 addons.go:514] duration metric: took 2m9.757569803s for enable addons: enabled=[inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0127 11:20:57.398277  306694 start.go:246] waiting for cluster config update ...
	I0127 11:20:57.398297  306694 start.go:255] writing updated cluster config ...
	I0127 11:20:57.398590  306694 ssh_runner.go:195] Run: rm -f paused
	I0127 11:20:57.812383  306694 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 11:20:57.815603  306694 out.go:177] * Done! kubectl is now configured to use "addons-334107" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 11:23:43 addons-334107 crio[972]: time="2025-01-27 11:23:43.405531281Z" level=info msg="Removed pod sandbox: b1614cf99db15b896eb76d5bf14b2a9f5797aa64c079b80c7cd91d6d5b3b12bf" id=6e7aaebc-062a-4e28-8c89-50f1dc33a217 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jan 27 11:25:07 addons-334107 crio[972]: time="2025-01-27 11:25:07.972012270Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-vvfpk/POD" id=f1c8de6d-ae9e-404c-93a2-cd930fd1ebec name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 27 11:25:07 addons-334107 crio[972]: time="2025-01-27 11:25:07.972083490Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.034591212Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-vvfpk Namespace:default ID:c926312c644aa16e6c9d56266bc40a9bade67c71d5b7b7d2c599f33412b8a7b3 UID:c6c832d2-b15c-4587-9f93-8da1ac799842 NetNS:/var/run/netns/995dd3e2-0b57-4e02-aa38-6cd17ceafc31 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.034646046Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-vvfpk to CNI network \"kindnet\" (type=ptp)"
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.052129829Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-vvfpk Namespace:default ID:c926312c644aa16e6c9d56266bc40a9bade67c71d5b7b7d2c599f33412b8a7b3 UID:c6c832d2-b15c-4587-9f93-8da1ac799842 NetNS:/var/run/netns/995dd3e2-0b57-4e02-aa38-6cd17ceafc31 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.052305344Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-vvfpk for CNI network kindnet (type=ptp)"
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.055741749Z" level=info msg="Ran pod sandbox c926312c644aa16e6c9d56266bc40a9bade67c71d5b7b7d2c599f33412b8a7b3 with infra container: default/hello-world-app-7d9564db4-vvfpk/POD" id=f1c8de6d-ae9e-404c-93a2-cd930fd1ebec name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.057067899Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=06345210-60e8-437c-a062-82a00a88560e name=/runtime.v1.ImageService/ImageStatus
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.057300932Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=06345210-60e8-437c-a062-82a00a88560e name=/runtime.v1.ImageService/ImageStatus
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.059462781Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=8d0cc6b7-3763-46db-a10d-33c2c3ec95c8 name=/runtime.v1.ImageService/PullImage
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.064352268Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Jan 27 11:25:08 addons-334107 crio[972]: time="2025-01-27 11:25:08.331724234Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.118708333Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=8d0cc6b7-3763-46db-a10d-33c2c3ec95c8 name=/runtime.v1.ImageService/PullImage
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.120310371Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=75bea10a-5abc-4f40-af3a-1207737d554a name=/runtime.v1.ImageService/ImageStatus
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.121031674Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=75bea10a-5abc-4f40-af3a-1207737d554a name=/runtime.v1.ImageService/ImageStatus
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.123876154Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6cae1524-6e41-4374-a225-1a91fb34f929 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.124578150Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6cae1524-6e41-4374-a225-1a91fb34f929 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.125405955Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-vvfpk/hello-world-app" id=e0f47264-7703-446b-a786-7ed8bf382a57 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.125507231Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.155168801Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e9a06058efa85e97048d6e7df9748886c47bc52d518a61ee958d2bf3d9a762ad/merged/etc/passwd: no such file or directory"
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.155218236Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e9a06058efa85e97048d6e7df9748886c47bc52d518a61ee958d2bf3d9a762ad/merged/etc/group: no such file or directory"
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.220384125Z" level=info msg="Created container fa8cab2f75d607875e5fa0a784063c921a77e05c4ced8311b841e746c2f432e6: default/hello-world-app-7d9564db4-vvfpk/hello-world-app" id=e0f47264-7703-446b-a786-7ed8bf382a57 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.223810380Z" level=info msg="Starting container: fa8cab2f75d607875e5fa0a784063c921a77e05c4ced8311b841e746c2f432e6" id=c8988ae2-4317-4ecb-b629-9edb4d6c1184 name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 11:25:09 addons-334107 crio[972]: time="2025-01-27 11:25:09.234395080Z" level=info msg="Started container" PID=9298 containerID=fa8cab2f75d607875e5fa0a784063c921a77e05c4ced8311b841e746c2f432e6 description=default/hello-world-app-7d9564db4-vvfpk/hello-world-app id=c8988ae2-4317-4ecb-b629-9edb4d6c1184 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c926312c644aa16e6c9d56266bc40a9bade67c71d5b7b7d2c599f33412b8a7b3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	fa8cab2f75d60       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   c926312c644aa       hello-world-app-7d9564db4-vvfpk
	a04bc0c3f9ed0       docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10                              2 minutes ago            Running             nginx                     0                   60e9b0494f4bd       nginx
	4b8925e0d29d2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   3f3a82062dc54       busybox
	14c5d8e0b1862       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             4 minutes ago            Running             controller                0                   d4fbbe25bf381       ingress-nginx-controller-56d7c84fd4-xrclb
	4594eb9c71ae6       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             4 minutes ago            Exited              patch                     3                   210c5b9de3c94       ingress-nginx-admission-patch-z4mkd
	ba2e78bf1c9ab       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   4 minutes ago            Exited              create                    0                   16b565983b504       ingress-nginx-admission-create-692jz
	59f67619505db       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns      0                   b3373cc4e84d2       kube-ingress-dns-minikube
	a4bcc12650a77       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   37fd48896c760       storage-provisioner
	488bfe4b1889d       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             5 minutes ago            Running             coredns                   0                   8d468da2db2e0       coredns-668d6bf9bc-dqb5w
	e65d15eac042c       2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903                                                             6 minutes ago            Running             kindnet-cni               0                   52bff016ca9ce       kindnet-mgrzk
	c4dae62a94389       e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0                                                             6 minutes ago            Running             kube-proxy                0                   82123bdfd5bc0       kube-proxy-qjrg8
	10b9240cdc359       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82                                                             6 minutes ago            Running             etcd                      0                   da56c83553392       etcd-addons-334107
	d29bd777f382c       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c                                                             6 minutes ago            Running             kube-scheduler            0                   21159d7411636       kube-scheduler-addons-334107
	d1e60237326dc       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19                                                             6 minutes ago            Running             kube-apiserver            0                   ce101f80413e4       kube-apiserver-addons-334107
	e8ac37a6868a9       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13                                                             6 minutes ago            Running             kube-controller-manager   0                   d700d1475f56f       kube-controller-manager-addons-334107
	
	
	==> coredns [488bfe4b1889d9ff519dfbd782a09a35f0ed4d9c7fe784a97e28da8836f3dcef] <==
	[INFO] 10.244.0.10:43012 - 9546 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002411873s
	[INFO] 10.244.0.10:43012 - 33520 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000148045s
	[INFO] 10.244.0.10:43012 - 26124 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000096582s
	[INFO] 10.244.0.10:46548 - 40741 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000143401s
	[INFO] 10.244.0.10:46548 - 40962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119409s
	[INFO] 10.244.0.10:56252 - 8828 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101489s
	[INFO] 10.244.0.10:56252 - 8623 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078269s
	[INFO] 10.244.0.10:40724 - 39143 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093342s
	[INFO] 10.244.0.10:40724 - 38707 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084873s
	[INFO] 10.244.0.10:47887 - 20994 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002676348s
	[INFO] 10.244.0.10:47887 - 21463 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003185934s
	[INFO] 10.244.0.10:39708 - 25660 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013362s
	[INFO] 10.244.0.10:39708 - 25496 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155175s
	[INFO] 10.244.0.21:55307 - 50755 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000209156s
	[INFO] 10.244.0.21:41280 - 12699 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000755871s
	[INFO] 10.244.0.21:51061 - 8636 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166268s
	[INFO] 10.244.0.21:34796 - 2628 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138937s
	[INFO] 10.244.0.21:40695 - 15589 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102047s
	[INFO] 10.244.0.21:59608 - 2895 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133497s
	[INFO] 10.244.0.21:45347 - 60223 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002086155s
	[INFO] 10.244.0.21:56816 - 7520 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001696471s
	[INFO] 10.244.0.21:42678 - 64145 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002087009s
	[INFO] 10.244.0.21:43219 - 48503 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001931333s
	[INFO] 10.244.0.24:57040 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000247261s
	[INFO] 10.244.0.24:53521 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146757s
	
	
	==> describe nodes <==
	Name:               addons-334107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-334107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=addons-334107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_18_43_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-334107
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:18:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-334107
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:24:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:23:18 +0000   Mon, 27 Jan 2025 11:18:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:23:18 +0000   Mon, 27 Jan 2025 11:18:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:23:18 +0000   Mon, 27 Jan 2025 11:18:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:23:18 +0000   Mon, 27 Jan 2025 11:19:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-334107
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0559c8877dd847edb467952a2ebfc461
	  System UUID:                3addd49e-9c97-41b9-9c81-8781244e0827
	  Boot ID:                    dd59411c-5b67-4eb9-9e59-86d920ad153c
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  default                     hello-world-app-7d9564db4-vvfpk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-xrclb    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m16s
	  kube-system                 coredns-668d6bf9bc-dqb5w                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m22s
	  kube-system                 etcd-addons-334107                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m29s
	  kube-system                 kindnet-mgrzk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m22s
	  kube-system                 kube-apiserver-addons-334107                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-addons-334107        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-qjrg8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-addons-334107                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m17s                  kube-proxy       
	  Normal   Starting                 6m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m34s (x7 over 6m34s)  kubelet          Node addons-334107 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m34s (x6 over 6m34s)  kubelet          Node addons-334107 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m34s (x6 over 6m34s)  kubelet          Node addons-334107 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m26s                  kubelet          Node addons-334107 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m26s                  kubelet          Node addons-334107 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m26s                  kubelet          Node addons-334107 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m23s                  node-controller  Node addons-334107 event: Registered Node addons-334107 in Controller
	  Normal   NodeReady                5m39s                  kubelet          Node addons-334107 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan27 09:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016392] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510457] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035819] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.744594] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.522184] kauditd_printk_skb: 36 callbacks suppressed
	[Jan27 10:13] hrtimer: interrupt took 48937752 ns
	[Jan27 10:45] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [10b9240cdc359c52cb2439c80ee37e77fc169b1dd198a2e9f98b06dc3b2998ab] <==
	{"level":"info","ts":"2025-01-27T11:18:35.895283Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T11:18:35.895393Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T11:18:35.891794Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-01-27T11:18:35.895489Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-01-27T11:18:36.831106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-01-27T11:18:36.831226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-01-27T11:18:36.831303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-01-27T11:18:36.831348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-01-27T11:18:36.831381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-01-27T11:18:36.831416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-01-27T11:18:36.831450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-01-27T11:18:36.835190Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:18:36.839321Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-334107 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T11:18:36.843080Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:18:36.843184Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T11:18:36.843241Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T11:18:36.843124Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:18:36.843378Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:18:36.843432Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:18:36.843150Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:18:36.843875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T11:18:36.843982Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T11:18:36.844711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T11:18:36.844810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-01-27T11:18:50.813973Z","caller":"traceutil/trace.go:171","msg":"trace[1329669032] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"146.689665ms","start":"2025-01-27T11:18:50.667126Z","end":"2025-01-27T11:18:50.813815Z","steps":["trace[1329669032] 'process raft request'  (duration: 112.213906ms)","trace[1329669032] 'compare'  (duration: 12.643229ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:25:09 up  2:07,  0 users,  load average: 1.12, 1.60, 2.31
	Linux addons-334107 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e65d15eac042c1983268dac9e7b510a1644fd414fbad50ab664d3fc0bc63e806] <==
	I0127 11:23:09.828881       1 main.go:301] handling current node
	I0127 11:23:19.821214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:23:19.821325       1 main.go:301] handling current node
	I0127 11:23:29.827183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:23:29.827220       1 main.go:301] handling current node
	I0127 11:23:39.830108       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:23:39.830142       1 main.go:301] handling current node
	I0127 11:23:49.821192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:23:49.821228       1 main.go:301] handling current node
	I0127 11:23:59.827413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:23:59.827450       1 main.go:301] handling current node
	I0127 11:24:09.830232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:24:09.830265       1 main.go:301] handling current node
	I0127 11:24:19.827569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:24:19.827604       1 main.go:301] handling current node
	I0127 11:24:29.827132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:24:29.827258       1 main.go:301] handling current node
	I0127 11:24:39.829514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:24:39.829549       1 main.go:301] handling current node
	I0127 11:24:49.820701       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:24:49.820746       1 main.go:301] handling current node
	I0127 11:24:59.827161       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:24:59.827295       1 main.go:301] handling current node
	I0127 11:25:09.820678       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0127 11:25:09.820717       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d1e60237326dcc94aa1a6ecd1368b3444bbcb41fd0e358fa08b719dc9af26360] <==
	E0127 11:21:08.885821       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55326: use of closed network connection
	E0127 11:21:09.148477       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55354: use of closed network connection
	E0127 11:21:09.293752       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55364: use of closed network connection
	I0127 11:21:18.634758       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.102.142"}
	E0127 11:22:06.431278       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0127 11:22:25.178266       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0127 11:22:40.642611       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 11:22:41.774543       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 11:22:46.229705       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 11:22:46.580957       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.236.62"}
	E0127 11:22:47.662804       1 watch.go:278] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0127 11:22:49.006100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:22:49.006165       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 11:22:49.044041       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:22:49.044097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 11:22:49.120866       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:22:49.120909       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 11:22:49.175846       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 11:22:49.175890       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0127 11:22:49.250594       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W0127 11:22:50.123367       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0127 11:22:50.176851       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0127 11:22:50.275715       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0127 11:22:51.275165       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 11:25:07.905266       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.151.232"}
	
	
	==> kube-controller-manager [e8ac37a6868a94409068ec9c3dccffd418adbc9520ecce56af733bb8ed0788a7] <==
	E0127 11:23:59.900598       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 11:23:59.901572       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:23:59.901612       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:24:42.622574       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:24:42.623717       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 11:24:42.624768       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:24:42.624809       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:24:42.774020       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:24:42.775171       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 11:24:42.776167       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:24:42.776205       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:24:51.324886       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:24:51.326007       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 11:24:51.327094       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:24:51.327134       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 11:24:54.372640       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 11:24:54.373730       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 11:24:54.374615       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 11:24:54.374653       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 11:25:07.681153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="44.713876ms"
	I0127 11:25:07.694878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.601291ms"
	I0127 11:25:07.695310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="49.673µs"
	I0127 11:25:07.699852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="52.357µs"
	I0127 11:25:10.051852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="21.636969ms"
	I0127 11:25:10.051948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="56.271µs"
	
	
	==> kube-proxy [c4dae62a94389a40c66b1730658e86bb19b14be2eb52f25e6e2f352fdf9f0123] <==
	I0127 11:18:51.524315       1 server_linux.go:66] "Using iptables proxy"
	I0127 11:18:52.147186       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0127 11:18:52.147393       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:18:52.742613       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0127 11:18:52.742752       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:18:52.747131       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:18:52.747546       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:18:52.747729       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:18:52.749115       1 config.go:199] "Starting service config controller"
	I0127 11:18:52.749197       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:18:52.749262       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:18:52.749295       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:18:52.749830       1 config.go:329] "Starting node config controller"
	I0127 11:18:52.749891       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:18:52.883823       1 shared_informer.go:320] Caches are synced for node config
	I0127 11:18:52.883860       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:18:52.883870       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d29bd777f382cf1cda6e1917d765202500fdbd72432791962adb952b90f03304] <==
	W0127 11:18:40.113700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 11:18:40.113776       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:40.113903       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:18:40.113958       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:40.991363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:18:40.991488       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.014242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:18:41.014284       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.031024       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:18:41.031178       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 11:18:41.072020       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 11:18:41.072062       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.142442       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 11:18:41.142582       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.167299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:18:41.167406       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.170020       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 11:18:41.170059       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.191714       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 11:18:41.191841       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.255329       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:18:41.255452       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:18:41.294832       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 11:18:41.294970       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 11:18:43.388401       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 11:24:43 addons-334107 kubelet[1494]: E0127 11:24:43.019586    1494 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/40a6fbb17686b29c331c9d3e8d926ef626f96ec3ae7dfa98e23c360f0aea5656/diff" to get inode usage: stat /var/lib/containers/storage/overlay/40a6fbb17686b29c331c9d3e8d926ef626f96ec3ae7dfa98e23c360f0aea5656/diff: no such file or directory, extraDiskErr: <nil>
	Jan 27 11:24:43 addons-334107 kubelet[1494]: E0127 11:24:43.031818    1494 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1cade1bd7ed9d7de989649b72071303b972232a4e67e2ee16fcc90d1f1b0378e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1cade1bd7ed9d7de989649b72071303b972232a4e67e2ee16fcc90d1f1b0378e/diff: no such file or directory, extraDiskErr: <nil>
	Jan 27 11:24:43 addons-334107 kubelet[1494]: E0127 11:24:43.042060    1494 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e554af701c79c82a2e1e0d8f765bf9453aeafb40009189570fcf0b463b84b87f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e554af701c79c82a2e1e0d8f765bf9453aeafb40009189570fcf0b463b84b87f/diff: no such file or directory, extraDiskErr: <nil>
	Jan 27 11:24:43 addons-334107 kubelet[1494]: E0127 11:24:43.043182    1494 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1cade1bd7ed9d7de989649b72071303b972232a4e67e2ee16fcc90d1f1b0378e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1cade1bd7ed9d7de989649b72071303b972232a4e67e2ee16fcc90d1f1b0378e/diff: no such file or directory, extraDiskErr: <nil>
	Jan 27 11:24:43 addons-334107 kubelet[1494]: E0127 11:24:43.088385    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977083087977649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:24:43 addons-334107 kubelet[1494]: E0127 11:24:43.088422    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977083087977649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:24:47 addons-334107 kubelet[1494]: E0127 11:24:47.086636    1494 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dcd0f37f7b3a45ccccf4dea3a39d745d0af841a98e93a3f8888dc8d2365bfaea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dcd0f37f7b3a45ccccf4dea3a39d745d0af841a98e93a3f8888dc8d2365bfaea/diff: no such file or directory, extraDiskErr: <nil>
	Jan 27 11:24:53 addons-334107 kubelet[1494]: E0127 11:24:53.090669    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977093090431685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:24:53 addons-334107 kubelet[1494]: E0127 11:24:53.090717    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977093090431685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:25:02 addons-334107 kubelet[1494]: I0127 11:25:02.878896    1494 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 11:25:03 addons-334107 kubelet[1494]: E0127 11:25:03.093403    1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977103093123440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:25:03 addons-334107 kubelet[1494]: E0127 11:25:03.093444    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737977103093123440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595443,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669843    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="564b2cfb-178b-48f0-9be6-0f482b0d0abb" containerName="csi-resizer"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669884    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="56a619e8-8993-4ad8-ba50-d83fce7f98b6" containerName="csi-provisioner"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669893    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="56a619e8-8993-4ad8-ba50-d83fce7f98b6" containerName="csi-snapshotter"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669902    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="da7aee75-89ae-4e2d-893c-b9814df2a20b" containerName="task-pv-container"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669908    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="56a619e8-8993-4ad8-ba50-d83fce7f98b6" containerName="csi-external-health-monitor-controller"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669917    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="56a619e8-8993-4ad8-ba50-d83fce7f98b6" containerName="node-driver-registrar"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669925    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="56a619e8-8993-4ad8-ba50-d83fce7f98b6" containerName="hostpath"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669931    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="2c0c146e-9f93-4a77-8515-0be41dfa9c9a" containerName="volume-snapshot-controller"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669938    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="7c932eb0-5955-4351-ac80-7ff157d8abb5" containerName="csi-attacher"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669944    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="56a619e8-8993-4ad8-ba50-d83fce7f98b6" containerName="liveness-probe"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.669951    1494 memory_manager.go:355] "RemoveStaleState removing state" podUID="1fdd5db8-41c7-4633-b52f-8b86c96fdb15" containerName="volume-snapshot-controller"
	Jan 27 11:25:07 addons-334107 kubelet[1494]: I0127 11:25:07.746699    1494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl87n\" (UniqueName: \"kubernetes.io/projected/c6c832d2-b15c-4587-9f93-8da1ac799842-kube-api-access-fl87n\") pod \"hello-world-app-7d9564db4-vvfpk\" (UID: \"c6c832d2-b15c-4587-9f93-8da1ac799842\") " pod="default/hello-world-app-7d9564db4-vvfpk"
	Jan 27 11:25:08 addons-334107 kubelet[1494]: W0127 11:25:08.054265    1494 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e75c4376fc5aa457bbcb5fe886c60de56bfdf330b1f67dd618e9e622acf2db82/crio-c926312c644aa16e6c9d56266bc40a9bade67c71d5b7b7d2c599f33412b8a7b3 WatchSource:0}: Error finding container c926312c644aa16e6c9d56266bc40a9bade67c71d5b7b7d2c599f33412b8a7b3: Status 404 returned error can't find the container with id c926312c644aa16e6c9d56266bc40a9bade67c71d5b7b7d2c599f33412b8a7b3
	
	
	==> storage-provisioner [a4bcc12650a770db6be684c21e3ec176d146387a7f52871f2bc91662d9a2a765] <==
	I0127 11:19:31.527416       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 11:19:31.564133       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 11:19:31.565145       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 11:19:31.572695       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 11:19:31.572935       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-334107_cbdc103e-72ea-47af-8ff5-e5a4a785a337!
	I0127 11:19:31.573880       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06c10730-32d2-4a1c-96f8-20e3d393adac", APIVersion:"v1", ResourceVersion:"916", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-334107_cbdc103e-72ea-47af-8ff5-e5a4a785a337 became leader
	I0127 11:19:31.673195       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-334107_cbdc103e-72ea-47af-8ff5-e5a4a785a337!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-334107 -n addons-334107
helpers_test.go:261: (dbg) Run:  kubectl --context addons-334107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-692jz ingress-nginx-admission-patch-z4mkd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-334107 describe pod ingress-nginx-admission-create-692jz ingress-nginx-admission-patch-z4mkd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-334107 describe pod ingress-nginx-admission-create-692jz ingress-nginx-admission-patch-z4mkd: exit status 1 (100.734461ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-692jz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z4mkd" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-334107 describe pod ingress-nginx-admission-create-692jz ingress-nginx-admission-patch-z4mkd: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 addons disable ingress --alsologtostderr -v=1: (7.745706601s)
--- FAIL: TestAddons/parallel/Ingress (153.92s)

TestPreload (2404.7s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-250630 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0127 11:55:58.654902  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:58:29.176877  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:59:01.719683  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:00:58.659959  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:03:29.176631  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:05:58.651713  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:06:32.255563  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:08:29.177267  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:10:58.658585  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:13:29.177303  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:15:41.722799  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:15:58.660543  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:29.176603  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:58.660331  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:23:12.258569  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:23:29.177157  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:25:58.659415  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:28:29.177278  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:58.651710  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:32:21.725008  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:33:29.176446  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p test-preload-250630 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: signal: killed (40m0.012703072s)

-- stdout --
	* [test-preload-250630] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "test-preload-250630" primary control-plane node in "test-preload-250630" cluster
	* Pulling base image v0.0.46 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.24.4 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

-- /stdout --
** stderr ** 
	I0127 11:54:14.294778  434272 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:54:14.295007  434272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:14.295017  434272 out.go:358] Setting ErrFile to fd 2...
	I0127 11:54:14.295022  434272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:14.295314  434272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:54:14.295756  434272 out.go:352] Setting JSON to false
	I0127 11:54:14.296698  434272 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9402,"bootTime":1737969453,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 11:54:14.296770  434272 start.go:139] virtualization:  
	I0127 11:54:14.301032  434272 out.go:177] * [test-preload-250630] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:54:14.305582  434272 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:54:14.305785  434272 notify.go:220] Checking for updates...
	I0127 11:54:14.312445  434272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:54:14.315673  434272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:54:14.319130  434272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 11:54:14.322376  434272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:54:14.325612  434272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:54:14.329024  434272 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:54:14.361704  434272 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:54:14.361819  434272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:54:14.419504  434272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 11:54:14.41013069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:54:14.419618  434272 docker.go:318] overlay module found
	I0127 11:54:14.424720  434272 out.go:177] * Using the docker driver based on user configuration
	I0127 11:54:14.427605  434272 start.go:297] selected driver: docker
	I0127 11:54:14.427634  434272 start.go:901] validating driver "docker" against <nil>
	I0127 11:54:14.427649  434272 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:54:14.428397  434272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:54:14.484932  434272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 11:54:14.476031592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:54:14.485188  434272 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:54:14.485450  434272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:54:14.488352  434272 out.go:177] * Using Docker driver with root privileges
	I0127 11:54:14.491268  434272 cni.go:84] Creating CNI manager for ""
	I0127 11:54:14.491332  434272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:54:14.491360  434272 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:54:14.491470  434272 start.go:340] cluster config:
	{Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:54:14.494565  434272 out.go:177] * Starting "test-preload-250630" primary control-plane node in "test-preload-250630" cluster
	I0127 11:54:14.497436  434272 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 11:54:14.500299  434272 out.go:177] * Pulling base image v0.0.46 ...
	I0127 11:54:14.503167  434272 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 11:54:14.503259  434272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 11:54:14.503577  434272 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/config.json ...
	I0127 11:54:14.503618  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/config.json: {Name:mk57dd6cf3c53c6ada1352fee437504d65222850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:14.503895  434272 cache.go:107] acquiring lock: {Name:mk9692d47e200f11d4993f236bd01b0a253b91b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.504050  434272 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:14.504442  434272 cache.go:107] acquiring lock: {Name:mk190b5e35ee09ef09980447d286561c72c39a13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.504618  434272 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:14.504880  434272 cache.go:107] acquiring lock: {Name:mkc069e0a063b6e8c91c2f43ee1592d05b5686fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.505014  434272 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:14.505261  434272 cache.go:107] acquiring lock: {Name:mkb9ee01f6fbb348522e3d1ad78b1802312feffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.505385  434272 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:14.505642  434272 cache.go:107] acquiring lock: {Name:mkc5535169ee61955686542402fc72777a25f235 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.505763  434272 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:14.505973  434272 cache.go:107] acquiring lock: {Name:mk09db466f53eb520fd7ab5dc91e364b760e662e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.506094  434272 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 11:54:14.506314  434272 cache.go:107] acquiring lock: {Name:mk7b4b2fb26cbf6e45f6b6c1738d7a2530518e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.506427  434272 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:14.506669  434272 cache.go:107] acquiring lock: {Name:mkf29f668ed0239f52497c64b0fe1330729cd339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.506836  434272 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:14.509198  434272 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:14.509605  434272 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:14.509787  434272 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 11:54:14.509963  434272 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:14.510754  434272 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:14.510952  434272 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:14.511114  434272 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:14.511262  434272 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:14.529390  434272 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 11:54:14.529415  434272 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 11:54:14.529434  434272 cache.go:227] Successfully downloaded all kic artifacts
	I0127 11:54:14.529477  434272 start.go:360] acquireMachinesLock for test-preload-250630: {Name:mk87d4fcbdbe0721e27b7c7e4174fc1b79e5479a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.529610  434272 start.go:364] duration metric: took 112.525µs to acquireMachinesLock for "test-preload-250630"
	I0127 11:54:14.529644  434272 start.go:93] Provisioning new machine with config: &{Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:54:14.529726  434272 start.go:125] createHost starting for "" (driver="docker")
	I0127 11:54:14.533631  434272 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0127 11:54:14.533901  434272 start.go:159] libmachine.API.Create for "test-preload-250630" (driver="docker")
	I0127 11:54:14.533940  434272 client.go:168] LocalClient.Create starting
	I0127 11:54:14.534010  434272 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem
	I0127 11:54:14.534052  434272 main.go:141] libmachine: Decoding PEM data...
	I0127 11:54:14.534081  434272 main.go:141] libmachine: Parsing certificate...
	I0127 11:54:14.534141  434272 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem
	I0127 11:54:14.534165  434272 main.go:141] libmachine: Decoding PEM data...
	I0127 11:54:14.534175  434272 main.go:141] libmachine: Parsing certificate...
	I0127 11:54:14.534543  434272 cli_runner.go:164] Run: docker network inspect test-preload-250630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 11:54:14.565871  434272 cli_runner.go:211] docker network inspect test-preload-250630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 11:54:14.565962  434272 network_create.go:284] running [docker network inspect test-preload-250630] to gather additional debugging logs...
	I0127 11:54:14.565982  434272 cli_runner.go:164] Run: docker network inspect test-preload-250630
	W0127 11:54:14.585049  434272 cli_runner.go:211] docker network inspect test-preload-250630 returned with exit code 1
	I0127 11:54:14.585096  434272 network_create.go:287] error running [docker network inspect test-preload-250630]: docker network inspect test-preload-250630: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network test-preload-250630 not found
	I0127 11:54:14.585115  434272 network_create.go:289] output of [docker network inspect test-preload-250630]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network test-preload-250630 not found
	
	** /stderr **
	I0127 11:54:14.585223  434272 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:54:14.600477  434272 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83a41a4be89e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bb:86:ff:d6} reservation:<nil>}
	I0127 11:54:14.600866  434272 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b8647f61e26c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:4f:9a:96:61} reservation:<nil>}
	I0127 11:54:14.601141  434272 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8a54f92038ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:21:b4:54:50} reservation:<nil>}
	I0127 11:54:14.601544  434272 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c2d1a0}
	I0127 11:54:14.601571  434272 network_create.go:124] attempt to create docker network test-preload-250630 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 11:54:14.601625  434272 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-250630 test-preload-250630
	I0127 11:54:14.688668  434272 network_create.go:108] docker network test-preload-250630 192.168.76.0/24 created
	I0127 11:54:14.688709  434272 kic.go:121] calculated static IP "192.168.76.2" for the "test-preload-250630" container
	I0127 11:54:14.688826  434272 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 11:54:14.705612  434272 cli_runner.go:164] Run: docker volume create test-preload-250630 --label name.minikube.sigs.k8s.io=test-preload-250630 --label created_by.minikube.sigs.k8s.io=true
	I0127 11:54:14.724647  434272 oci.go:103] Successfully created a docker volume test-preload-250630
	I0127 11:54:14.724744  434272 cli_runner.go:164] Run: docker run --rm --name test-preload-250630-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-250630 --entrypoint /usr/bin/test -v test-preload-250630:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 11:54:15.001333  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0127 11:54:15.063796  434272 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:15.063870  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 11:54:15.074273  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0127 11:54:15.074306  434272 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 568.335823ms
	I0127 11:54:15.074328  434272 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0127 11:54:15.076368  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 11:54:15.076912  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 11:54:15.079259  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 11:54:15.082384  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 11:54:15.099420  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0127 11:54:15.280426  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0127 11:54:15.280527  434272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 773.859946ms
	I0127 11:54:15.280591  434272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0127 11:54:15.411788  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0127 11:54:15.411882  434272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 906.62328ms
	I0127 11:54:15.411939  434272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0127 11:54:15.567629  434272 oci.go:107] Successfully prepared a docker volume test-preload-250630
	I0127 11:54:15.567666  434272 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	W0127 11:54:15.567811  434272 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 11:54:15.567938  434272 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 11:54:15.584968  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0127 11:54:15.584996  434272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 1.080120921s
	I0127 11:54:15.585009  434272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0127 11:54:15.605215  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0127 11:54:15.605249  434272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 1.100812044s
	I0127 11:54:15.605264  434272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0127 11:54:15.632043  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0127 11:54:15.632245  434272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 1.126595088s
	I0127 11:54:15.632301  434272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0127 11:54:15.694010  434272 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-250630 --name test-preload-250630 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-250630 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-250630 --network test-preload-250630 --ip 192.168.76.2 --volume test-preload-250630:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	W0127 11:54:15.709245  434272 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:15.709303  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 11:54:16.209607  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Running}}
	I0127 11:54:16.242121  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:54:16.299591  434272 cli_runner.go:164] Run: docker exec test-preload-250630 stat /var/lib/dpkg/alternatives/iptables
	I0127 11:54:16.332092  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 11:54:16.332120  434272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.828227157s
	I0127 11:54:16.332133  434272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 11:54:16.382541  434272 oci.go:144] the created container "test-preload-250630" has a running status.
	I0127 11:54:16.382580  434272 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa...
	I0127 11:54:16.392769  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0127 11:54:16.392797  434272 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 1.886491073s
	I0127 11:54:16.392809  434272 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0127 11:54:16.392820  434272 cache.go:87] Successfully saved all images to host disk.
	I0127 11:54:16.625391  434272 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 11:54:16.655186  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:54:16.681000  434272 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 11:54:16.681024  434272 kic_runner.go:114] Args: [docker exec --privileged test-preload-250630 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 11:54:16.738178  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:54:16.762899  434272 machine.go:93] provisionDockerMachine start ...
	I0127 11:54:16.762997  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:16.790546  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:16.790822  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:16.790834  434272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:54:16.791509  434272 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34796->127.0.0.1:33323: read: connection reset by peer
	I0127 11:54:19.914655  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-250630
	
	I0127 11:54:19.914723  434272 ubuntu.go:169] provisioning hostname "test-preload-250630"
	I0127 11:54:19.914823  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:19.932544  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:19.932801  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:19.932819  434272 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-250630 && echo "test-preload-250630" | sudo tee /etc/hostname
	I0127 11:54:20.080621  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-250630
	
	I0127 11:54:20.080751  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:20.101062  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:20.101325  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:20.101351  434272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-250630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-250630/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-250630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:54:20.227222  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:54:20.227252  434272 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20319-300538/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-300538/.minikube}
	I0127 11:54:20.227272  434272 ubuntu.go:177] setting up certificates
	I0127 11:54:20.227282  434272 provision.go:84] configureAuth start
	I0127 11:54:20.227361  434272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-250630
	I0127 11:54:20.245308  434272 provision.go:143] copyHostCerts
	I0127 11:54:20.245390  434272 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem, removing ...
	I0127 11:54:20.245403  434272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem
	I0127 11:54:20.245479  434272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem (1679 bytes)
	I0127 11:54:20.245573  434272 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem, removing ...
	I0127 11:54:20.245582  434272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem
	I0127 11:54:20.245608  434272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem (1082 bytes)
	I0127 11:54:20.245665  434272 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem, removing ...
	I0127 11:54:20.245673  434272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem
	I0127 11:54:20.245696  434272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem (1123 bytes)
	I0127 11:54:20.245755  434272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem org=jenkins.test-preload-250630 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-250630]
	I0127 11:54:21.089137  434272 provision.go:177] copyRemoteCerts
	I0127 11:54:21.089238  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:54:21.089289  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.106780  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.195921  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:54:21.220806  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 11:54:21.245131  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:54:21.269331  434272 provision.go:87] duration metric: took 1.04203529s to configureAuth
	I0127 11:54:21.269357  434272 ubuntu.go:193] setting minikube options for container-runtime
	I0127 11:54:21.269540  434272 config.go:182] Loaded profile config "test-preload-250630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 11:54:21.269648  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.286758  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:21.286996  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:21.287018  434272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:54:21.514503  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:54:21.514532  434272 machine.go:96] duration metric: took 4.751605663s to provisionDockerMachine
	I0127 11:54:21.514543  434272 client.go:171] duration metric: took 6.980588018s to LocalClient.Create
	I0127 11:54:21.514556  434272 start.go:167] duration metric: took 6.980655382s to libmachine.API.Create "test-preload-250630"
	I0127 11:54:21.514564  434272 start.go:293] postStartSetup for "test-preload-250630" (driver="docker")
	I0127 11:54:21.514575  434272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:54:21.514641  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:54:21.514687  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.531456  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.620621  434272 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:54:21.623840  434272 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 11:54:21.623879  434272 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 11:54:21.623890  434272 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 11:54:21.623898  434272 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 11:54:21.623909  434272 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/addons for local assets ...
	I0127 11:54:21.623975  434272 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/files for local assets ...
	I0127 11:54:21.624059  434272 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem -> 3059362.pem in /etc/ssl/certs
	I0127 11:54:21.624166  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:54:21.632450  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem --> /etc/ssl/certs/3059362.pem (1708 bytes)
	I0127 11:54:21.656764  434272 start.go:296] duration metric: took 142.183943ms for postStartSetup
	I0127 11:54:21.657131  434272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-250630
	I0127 11:54:21.678254  434272 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/config.json ...
	I0127 11:54:21.678545  434272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:54:21.678599  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.695997  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.779835  434272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 11:54:21.784289  434272 start.go:128] duration metric: took 7.254549085s to createHost
	I0127 11:54:21.784312  434272 start.go:83] releasing machines lock for "test-preload-250630", held for 7.254686816s
	I0127 11:54:21.784387  434272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-250630
	I0127 11:54:21.801300  434272 ssh_runner.go:195] Run: cat /version.json
	I0127 11:54:21.801364  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.801617  434272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:54:21.801676  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.829895  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.840687  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.914567  434272 ssh_runner.go:195] Run: systemctl --version
	I0127 11:54:22.051213  434272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:54:22.192014  434272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:54:22.196266  434272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:54:22.220232  434272 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0127 11:54:22.220363  434272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:54:22.253861  434272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 11:54:22.253921  434272 start.go:495] detecting cgroup driver to use...
	I0127 11:54:22.253970  434272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 11:54:22.254041  434272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:54:22.270169  434272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:54:22.282609  434272 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:54:22.282713  434272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:54:22.296604  434272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:54:22.310967  434272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:54:22.399562  434272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:54:22.491967  434272 docker.go:233] disabling docker service ...
	I0127 11:54:22.492043  434272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:54:22.514297  434272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:54:22.527228  434272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:54:22.616182  434272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:54:22.717952  434272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:54:22.729182  434272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:54:22.745808  434272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 11:54:22.745922  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.756612  434272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:54:22.756725  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.767861  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.778681  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.789716  434272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:54:22.799600  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.810176  434272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.826826  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.837276  434272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:54:22.845784  434272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:54:22.854498  434272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:54:22.944430  434272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:54:23.057810  434272 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:54:23.057968  434272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:54:23.061928  434272 start.go:563] Will wait 60s for crictl version
	I0127 11:54:23.062064  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.065779  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:54:23.105119  434272 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0127 11:54:23.105212  434272 ssh_runner.go:195] Run: crio --version
	I0127 11:54:23.144687  434272 ssh_runner.go:195] Run: crio --version
	I0127 11:54:23.188431  434272 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.6 ...
	I0127 11:54:23.191397  434272 cli_runner.go:164] Run: docker network inspect test-preload-250630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:54:23.208019  434272 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 11:54:23.211828  434272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:54:23.222783  434272 kubeadm.go:883] updating cluster {Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:54:23.222899  434272 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 11:54:23.222946  434272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:54:23.258229  434272 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 11:54:23.258258  434272 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:54:23.258302  434272 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:23.258331  434272 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.258511  434272 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 11:54:23.258523  434272 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:23.258601  434272 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.258610  434272 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.258681  434272 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.258512  434272 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.261142  434272 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.261194  434272 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.261248  434272 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.261142  434272 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.261392  434272 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.261522  434272 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:23.261587  434272 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:23.261640  434272 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 11:54:23.630402  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.669672  434272 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0127 11:54:23.669710  434272 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.669765  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.673598  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.712902  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.716759  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.722340  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.723348  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 11:54:23.730803  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.732391  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	W0127 11:54:23.734358  434272 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:23.734552  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.771335  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.854883  434272 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "bd8cc6d58247078a865774b7f516f8afc3ac8cd080fd49650ca30ef2fbc6ebd1" in container runtime
	I0127 11:54:23.854936  434272 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.854987  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.894695  434272 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0127 11:54:23.894739  434272 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 11:54:23.894788  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.894858  434272 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "5753e4610b3ec0ac100c3535b8d8a7507b3d031148e168c2c3c4b0f389976074" in container runtime
	I0127 11:54:23.894878  434272 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.894916  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.925926  434272 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "3767741e7fba72f328a8500a18ef34481343eb78697e31ae5bf3e390a28317ae" in container runtime
	I0127 11:54:23.925969  434272 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.926023  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.931918  434272 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "81a4a8a4ac639bdd7e118359417a80cab1a0d0e4737eb735714cf7f8b15dc0c7" in container runtime
	I0127 11:54:23.931964  434272 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:23.932014  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.932099  434272 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0127 11:54:23.932117  434272 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.932140  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.940483  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.940558  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0127 11:54:23.940627  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:54:23.940695  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.940742  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:54:23.940792  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.941858  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.941921  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:24.090184  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:24.090262  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0127 11:54:24.090283  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0127 11:54:24.090362  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:24.090448  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:54:24.090528  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:24.090614  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:24.090687  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:24.289053  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:54:24.289147  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:24.289206  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:24.289267  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:24.289324  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:24.289431  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	W0127 11:54:24.293272  434272 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:24.293562  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:24.500207  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 11:54:24.500369  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:54:24.500487  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0127 11:54:24.500596  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 11:54:24.500718  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 11:54:24.500799  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:54:24.500881  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 11:54:24.500960  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:54:24.501046  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 11:54:24.501118  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:54:24.501203  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 11:54:24.501291  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:54:24.501380  434272 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0127 11:54:24.501428  434272 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:24.501476  434272 ssh_runner.go:195] Run: which crictl
	W0127 11:54:24.510627  434272 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0127 11:54:24.510713  434272 retry.go:31] will retry after 170.997262ms: ssh: rejected: connect failed (open failed)
	I0127 11:54:24.550782  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0127 11:54:24.550824  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0127 11:54:24.550881  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.551113  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0127 11:54:24.551136  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0127 11:54:24.551175  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.551461  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.24.4': No such file or directory
	I0127 11:54:24.551488  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 --> /var/lib/minikube/images/kube-apiserver_v1.24.4 (30873088 bytes)
	I0127 11:54:24.551528  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.551977  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.24.4': No such file or directory
	I0127 11:54:24.552005  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 --> /var/lib/minikube/images/kube-controller-manager_v1.24.4 (28246528 bytes)
	I0127 11:54:24.552047  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.555903  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.24.4': No such file or directory
	I0127 11:54:24.555952  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 --> /var/lib/minikube/images/kube-scheduler_v1.24.4 (14094336 bytes)
	I0127 11:54:24.556007  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.566766  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.24.4': No such file or directory
	I0127 11:54:24.566806  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 --> /var/lib/minikube/images/kube-proxy_v1.24.4 (38148096 bytes)
	I0127 11:54:24.566862  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.605922  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.629621  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.636169  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.649116  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.686358  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.688491  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
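The `ssh_runner.go:352` / `:362` pairs above follow one pattern per image: stat the remote path for `"<size> <mtime>"`, and only when that fails (or the size differs) fall back to an scp from the local cache. A stdlib-only sketch of that decision, using local files in place of the SSH session; the names `remote_stat` and `sync_image` are illustrative, not minikube's API:

```python
import shutil
from pathlib import Path
from typing import Optional


def remote_stat(path: Path) -> Optional[str]:
    """Mimic `stat -c "%s %y" <path>`; None plays the role of exit status 1."""
    if not path.exists():
        return None
    st = path.stat()
    return f"{st.st_size} {st.st_mtime}"


def sync_image(cache: Path, dest: Path) -> bool:
    """Copy cache -> dest unless a same-sized file is already there.

    Returns True when a transfer happened (the scp lines in the log),
    False on a cache hit (the existence check succeeded).
    """
    existing = remote_stat(dest)
    if existing is not None and existing.split()[0] == str(cache.stat().st_size):
        return False
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(cache, dest)  # stand-in for the scp over SSH
    return True
```

On a fresh node every check fails with "No such file or directory", as in this log, so every image is transferred; a second run would skip all of them.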
	I0127 11:54:24.843139  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:25.073048  434272 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 11:54:25.073116  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 11:54:25.239364  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:25.836872  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:25.836908  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0127 11:54:25.837087  434272 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:54:25.837136  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:54:29.180782  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (3.343619653s)
	I0127 11:54:29.180808  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 11:54:29.180818  434272 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.343837719s)
	I0127 11:54:29.180857  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 11:54:29.180826  434272 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:54:29.180946  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:54:29.180949  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:54:30.041376  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0127 11:54:30.041421  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0127 11:54:30.041533  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 11:54:30.041558  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:54:30.041605  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:54:31.148571  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (1.106942565s)
	I0127 11:54:31.148606  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 11:54:31.148627  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:54:31.148675  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:54:33.522521  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.373819276s)
	I0127 11:54:33.522552  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 11:54:33.522574  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:54:33.522628  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:54:35.281593  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.758934925s)
	I0127 11:54:35.281625  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 11:54:35.281648  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:54:35.281700  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:54:37.133562  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (1.851833696s)
	I0127 11:54:37.133589  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 11:54:37.133611  434272 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:54:37.133661  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:54:37.686097  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0127 11:54:37.686134  434272 cache_images.go:123] Successfully loaded all cached images
	I0127 11:54:37.686140  434272 cache_images.go:92] duration metric: took 14.427868673s to LoadCachedImages
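Note the shape of the work above: the scp transfers run concurrently (six SSH clients open at once), while the `crio.go:275] Loading image:` lines show `podman load` invocations strictly one at a time. A minimal sketch of that concurrency shape, assuming a lock is what serializes the load phase (the sleep and list are stand-ins, not minikube code):

```python
import threading
import time

load_lock = threading.Lock()
loaded = []


def transfer_and_load(name: str) -> None:
    time.sleep(0.01)      # concurrent transfer phase (the parallel scp calls)
    with load_lock:       # only one image is loaded into the runtime at once
        loaded.append(name)


threads = [
    threading.Thread(target=transfer_and_load, args=(n,))
    for n in ["pause_3.7", "etcd_3.5.3-0", "coredns_v1.8.6"]
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Serializing the load step keeps the container runtime from importing several large tarballs simultaneously while still overlapping the network transfers.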
	I0127 11:54:37.686151  434272 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.24.4 crio true true} ...
	I0127 11:54:37.686246  434272 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-250630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:54:37.686337  434272 ssh_runner.go:195] Run: crio config
	I0127 11:54:37.739114  434272 cni.go:84] Creating CNI manager for ""
	I0127 11:54:37.739139  434272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:54:37.739151  434272 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:54:37.739175  434272 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-250630 NodeName:test-preload-250630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:54:37.739313  434272 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-250630"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
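The generated kubeadm config above is four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). For a flat, machine-generated stream like this, the document kinds can be pulled with the stdlib alone, no YAML parser; `doc_kinds` here is an illustrative helper, not part of minikube:

```python
import re


def doc_kinds(config: str) -> list:
    """Return the `kind:` of each document in a multi-doc YAML string."""
    kinds = []
    for doc in re.split(r"^---$", config, flags=re.M):
        m = re.search(r"^kind:\s*(\S+)", doc, flags=re.M)
        if m:
            kinds.append(m.group(1))
    return kinds
```

This kind of check is handy when diffing the `/var/tmp/minikube/kubeadm.yaml.new` that the log writes a few lines later against an expected configuration.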
	I0127 11:54:37.739389  434272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 11:54:37.748240  434272 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.24.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.24.4': No such file or directory
	
	Initiating transfer...
	I0127 11:54:37.748302  434272 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.24.4
	I0127 11:54:37.757168  434272 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubectl
	I0127 11:54:37.757561  434272 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubelet
	I0127 11:54:37.757730  434272 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubeadm
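Each download URL above carries a `checksum=file:...sha256` parameter: the fetched binary is verified against the published SHA-256 digest before it is cached. The same verification in stdlib Python, with in-memory bytes standing in for the downloaded file and its digest file:

```python
import hashlib


def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """True iff `data` hashes to the published hex digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex
```

A mismatch here is the signal to discard the cached download and retry rather than scp a corrupt binary onto the node.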
	I0127 11:54:38.393755  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm
	I0127 11:54:38.399871  434272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubeadm': No such file or directory
	I0127 11:54:38.399951  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubeadm --> /var/lib/minikube/binaries/v1.24.4/kubeadm (43384832 bytes)
	I0127 11:54:38.551235  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubectl
	I0127 11:54:38.582040  434272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubectl': No such file or directory
	I0127 11:54:38.582145  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubectl --> /var/lib/minikube/binaries/v1.24.4/kubectl (44564480 bytes)
	I0127 11:54:39.289524  434272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:54:39.302194  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubelet
	I0127 11:54:39.305726  434272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubelet': No such file or directory
	I0127 11:54:39.305762  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubelet --> /var/lib/minikube/binaries/v1.24.4/kubelet (112477080 bytes)
	I0127 11:54:39.812956  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:54:39.823588  434272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 11:54:39.844492  434272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:54:39.864630  434272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0127 11:54:39.883343  434272 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 11:54:39.886950  434272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
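The two commands above are a grep-then-rewrite idiom for `/etc/hosts`: drop any existing line ending in a tab plus `control-plane.minikube.internal`, append the fresh `192.168.76.2` mapping, and copy the result back. The same logic on a plain string, so it can be exercised without root or a real `/etc/hosts`; `ensure_hosts_entry` is an illustrative name:

```python
def ensure_hosts_entry(hosts: str, ip: str, name: str) -> str:
    """Replace (or add) the hosts line for `name`, keeping everything else."""
    kept = [line for line in hosts.splitlines()
            if not line.endswith(f"\t{name}")]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```

Because the old entry is filtered out first, the update is idempotent: re-running it never accumulates duplicate lines.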
	I0127 11:54:39.897824  434272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:54:39.978879  434272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:54:39.993042  434272 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630 for IP: 192.168.76.2
	I0127 11:54:39.993066  434272 certs.go:194] generating shared ca certs ...
	I0127 11:54:39.993095  434272 certs.go:226] acquiring lock for ca certs: {Name:mk949cfe0d73736f3d2e354b486773524a8fcbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:39.993248  434272 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key
	I0127 11:54:39.993294  434272 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key
	I0127 11:54:39.993305  434272 certs.go:256] generating profile certs ...
	I0127 11:54:39.993375  434272 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.key
	I0127 11:54:39.993393  434272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.crt with IP's: []
	I0127 11:54:40.208538  434272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.crt ...
	I0127 11:54:40.208583  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.crt: {Name:mkbe5f84f04fe2fb07110c5f88196f3897cee456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.208831  434272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.key ...
	I0127 11:54:40.208850  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.key: {Name:mkafa36dd0ffb47e37684bef9e2739f2d5377e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.208953  434272 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e
	I0127 11:54:40.208977  434272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0127 11:54:40.610197  434272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e ...
	I0127 11:54:40.610233  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e: {Name:mkbccbd20bbb8baa907eaab87a9a805a54d35e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.610426  434272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e ...
	I0127 11:54:40.610440  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e: {Name:mk384ccec8960490a2a560ff304781b2ee8269b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.610528  434272 certs.go:381] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt
	I0127 11:54:40.610609  434272 certs.go:385] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key
	I0127 11:54:40.610672  434272 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key
	I0127 11:54:40.610692  434272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt with IP's: []
	I0127 11:54:41.329706  434272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt ...
	I0127 11:54:41.329739  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt: {Name:mk68ca5432b5b3e721d0cde1dd464db7453b1592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:41.329929  434272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key ...
	I0127 11:54:41.329943  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key: {Name:mkd0785a978e80064f2312b070a14f97d0a0985c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:41.330131  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936.pem (1338 bytes)
	W0127 11:54:41.330179  434272 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936_empty.pem, impossibly tiny 0 bytes
	I0127 11:54:41.330194  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 11:54:41.330224  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:54:41.330255  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:54:41.330281  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem (1679 bytes)
	I0127 11:54:41.330329  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem (1708 bytes)
	I0127 11:54:41.330972  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:54:41.355350  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 11:54:41.379600  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:54:41.404139  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:54:41.428380  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 11:54:41.452078  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:54:41.475686  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:54:41.500365  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:54:41.524473  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem --> /usr/share/ca-certificates/3059362.pem (1708 bytes)
	I0127 11:54:41.548901  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:54:41.574316  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936.pem --> /usr/share/ca-certificates/305936.pem (1338 bytes)
	I0127 11:54:41.599982  434272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:54:41.619210  434272 ssh_runner.go:195] Run: openssl version
	I0127 11:54:41.625010  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:54:41.635010  434272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:54:41.639500  434272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:18 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:54:41.639573  434272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:54:41.647899  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:54:41.657994  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/305936.pem && ln -fs /usr/share/ca-certificates/305936.pem /etc/ssl/certs/305936.pem"
	I0127 11:54:41.667826  434272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/305936.pem
	I0127 11:54:41.672539  434272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:26 /usr/share/ca-certificates/305936.pem
	I0127 11:54:41.672654  434272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/305936.pem
	I0127 11:54:41.680316  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/305936.pem /etc/ssl/certs/51391683.0"
	I0127 11:54:41.692638  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3059362.pem && ln -fs /usr/share/ca-certificates/3059362.pem /etc/ssl/certs/3059362.pem"
	I0127 11:54:41.708379  434272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3059362.pem
	I0127 11:54:41.716668  434272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:26 /usr/share/ca-certificates/3059362.pem
	I0127 11:54:41.716789  434272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3059362.pem
	I0127 11:54:41.728041  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3059362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:54:41.738692  434272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:54:41.742444  434272 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:54:41.742523  434272 kubeadm.go:392] StartCluster: {Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:54:41.742617  434272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:54:41.742679  434272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:54:41.782253  434272 cri.go:89] found id: ""
	I0127 11:54:41.782352  434272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:54:41.791538  434272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:54:41.800621  434272 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 11:54:41.800709  434272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:54:41.809954  434272 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:54:41.809979  434272 kubeadm.go:157] found existing configuration files:
	
	I0127 11:54:41.810046  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:54:41.819774  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:54:41.819852  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:54:41.829107  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:54:41.838684  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:54:41.838784  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:54:41.847890  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:54:41.857084  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:54:41.857184  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:54:41.866112  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:54:41.875189  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:54:41.875279  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:54:41.884287  434272 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 11:54:41.931651  434272 kubeadm.go:310] [init] Using Kubernetes version: v1.24.4
	I0127 11:54:41.931927  434272 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:54:41.977546  434272 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 11:54:41.977657  434272 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 11:54:41.977730  434272 kubeadm.go:310] OS: Linux
	I0127 11:54:41.977800  434272 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 11:54:41.977862  434272 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 11:54:41.977918  434272 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 11:54:41.977971  434272 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 11:54:41.978023  434272 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 11:54:41.978130  434272 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 11:54:41.978185  434272 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 11:54:41.978239  434272 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 11:54:41.978290  434272 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 11:54:42.071719  434272 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:54:42.071922  434272 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:54:42.072045  434272 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:55:02.148362  434272 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:55:02.152044  434272 out.go:235]   - Generating certificates and keys ...
	I0127 11:55:02.152153  434272 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:55:02.152217  434272 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:55:02.776352  434272 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:55:03.490975  434272 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:55:03.928694  434272 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:55:04.196924  434272 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:55:04.413738  434272 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:55:04.414326  434272 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost test-preload-250630] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 11:55:04.608556  434272 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:55:04.608863  434272 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost test-preload-250630] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 11:55:05.154046  434272 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:55:05.462373  434272 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:55:05.667307  434272 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:55:05.667610  434272 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:55:05.875538  434272 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:55:06.547503  434272 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:55:06.992052  434272 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:55:07.940556  434272 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:55:08.027947  434272 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:55:08.028924  434272 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:55:08.029166  434272 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:55:08.131482  434272 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:55:08.135034  434272 out.go:235]   - Booting up control plane ...
	I0127 11:55:08.135163  434272 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:55:08.135241  434272 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:55:08.135317  434272 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:55:08.135728  434272 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:55:08.138325  434272 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:55:16.640084  434272 kubeadm.go:310] [apiclient] All control plane components are healthy after 8.502150 seconds
	I0127 11:55:16.640204  434272 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:55:16.654014  434272 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:55:17.175662  434272 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:55:17.175879  434272 kubeadm.go:310] [mark-control-plane] Marking the node test-preload-250630 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:55:17.686113  434272 kubeadm.go:310] [bootstrap-token] Using token: j4uj1s.qtvzwjfj9l0zqgva
	I0127 11:55:17.690486  434272 out.go:235]   - Configuring RBAC rules ...
	I0127 11:55:17.690619  434272 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:55:17.693818  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:55:17.699537  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:55:17.702234  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:55:17.705044  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:55:17.707251  434272 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:55:17.716828  434272 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:55:17.928251  434272 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:55:18.098669  434272 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:55:18.098691  434272 kubeadm.go:310] 
	I0127 11:55:18.098774  434272 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:55:18.098795  434272 kubeadm.go:310] 
	I0127 11:55:18.098902  434272 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:55:18.098912  434272 kubeadm.go:310] 
	I0127 11:55:18.098948  434272 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:55:18.099009  434272 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:55:18.099068  434272 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:55:18.099075  434272 kubeadm.go:310] 
	I0127 11:55:18.099129  434272 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:55:18.099134  434272 kubeadm.go:310] 
	I0127 11:55:18.099181  434272 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:55:18.099186  434272 kubeadm.go:310] 
	I0127 11:55:18.099237  434272 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:55:18.099345  434272 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:55:18.099425  434272 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:55:18.099432  434272 kubeadm.go:310] 
	I0127 11:55:18.099540  434272 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:55:18.099620  434272 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:55:18.099625  434272 kubeadm.go:310] 
	I0127 11:55:18.099756  434272 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j4uj1s.qtvzwjfj9l0zqgva \
	I0127 11:55:18.099873  434272 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 \
	I0127 11:55:18.099896  434272 kubeadm.go:310] 	--control-plane 
	I0127 11:55:18.099900  434272 kubeadm.go:310] 
	I0127 11:55:18.099985  434272 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:55:18.099989  434272 kubeadm.go:310] 
	I0127 11:55:18.100073  434272 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j4uj1s.qtvzwjfj9l0zqgva \
	I0127 11:55:18.100175  434272 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 
	I0127 11:55:18.107771  434272 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 11:55:18.107898  434272 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:55:18.107914  434272 cni.go:84] Creating CNI manager for ""
	I0127 11:55:18.107923  434272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:55:18.111673  434272 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 11:55:18.114696  434272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 11:55:18.126583  434272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.24.4/kubectl ...
	I0127 11:55:18.126607  434272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 11:55:18.165752  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 11:55:19.367445  434272 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.20165396s)
	I0127 11:55:19.367490  434272 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:55:19.367606  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:19.367686  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes test-preload-250630 minikube.k8s.io/updated_at=2025_01_27T11_55_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=test-preload-250630 minikube.k8s.io/primary=true
	I0127 11:55:19.496263  434272 ops.go:34] apiserver oom_adj: -16
	I0127 11:55:19.496358  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:19.996412  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:20.496713  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:20.996645  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:21.496490  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:21.997244  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:22.497183  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:22.997151  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:23.497378  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:23.997081  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:24.496638  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:24.996944  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:25.497462  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:25.996573  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:26.496463  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:26.997212  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:27.496596  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:27.996465  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:28.497070  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:28.996934  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:29.496941  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:29.996502  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:30.496400  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:30.997123  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:31.112499  434272 kubeadm.go:1113] duration metric: took 11.744940046s to wait for elevateKubeSystemPrivileges
	I0127 11:55:31.112528  434272 kubeadm.go:394] duration metric: took 49.370009314s to StartCluster
	I0127 11:55:31.112545  434272 settings.go:142] acquiring lock: {Name:mk59e26dfc61a439e501d9ae8e7cbc4a6f05e310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:55:31.112608  434272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:55:31.113326  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/kubeconfig: {Name:mka2258aa0d8dec49c19d97bc831e58d42b19053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:55:31.113540  434272 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:55:31.113633  434272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:55:31.113878  434272 config.go:182] Loaded profile config "test-preload-250630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 11:55:31.113918  434272 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:55:31.114012  434272 addons.go:69] Setting storage-provisioner=true in profile "test-preload-250630"
	I0127 11:55:31.114023  434272 addons.go:69] Setting default-storageclass=true in profile "test-preload-250630"
	I0127 11:55:31.114054  434272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-250630"
	I0127 11:55:31.114027  434272 addons.go:238] Setting addon storage-provisioner=true in "test-preload-250630"
	I0127 11:55:31.114157  434272 host.go:66] Checking if "test-preload-250630" exists ...
	I0127 11:55:31.114430  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:55:31.114614  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:55:31.116755  434272 out.go:177] * Verifying Kubernetes components...
	I0127 11:55:31.119997  434272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:55:31.157026  434272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:55:31.160027  434272 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:55:31.160049  434272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:55:31.160124  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:55:31.165317  434272 addons.go:238] Setting addon default-storageclass=true in "test-preload-250630"
	I0127 11:55:31.165360  434272 host.go:66] Checking if "test-preload-250630" exists ...
	I0127 11:55:31.165779  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:55:31.194560  434272 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:55:31.194581  434272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:55:31.194644  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:55:31.204828  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:55:31.230978  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:55:31.361764  434272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:55:31.376107  434272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:55:31.427275  434272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:55:31.472224  434272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:55:31.873304  434272 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0127 11:55:31.875220  434272 node_ready.go:35] waiting up to 6m0s for node "test-preload-250630" to be "Ready" ...
	W0127 11:55:31.951094  434272 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "test-preload-250630" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0127 11:55:31.951157  434272 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0127 11:55:32.026299  434272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:55:32.029148  434272 addons.go:514] duration metric: took 915.229099ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 11:55:33.879200  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:36.378763  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:38.878636  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:40.879345  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:43.379380  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:45.379990  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:47.879520  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:48.878550  434272 node_ready.go:49] node "test-preload-250630" has status "Ready":"True"
	I0127 11:55:48.878573  434272 node_ready.go:38] duration metric: took 17.00332387s for node "test-preload-250630" to be "Ready" ...
	I0127 11:55:48.878585  434272 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:55:48.889736  434272 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-hgtpg" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.896505  434272 pod_ready.go:93] pod "coredns-6d4b75cb6d-hgtpg" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.896534  434272 pod_ready.go:82] duration metric: took 2.006709037s for pod "coredns-6d4b75cb6d-hgtpg" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.896547  434272 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-zg4sc" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.902649  434272 pod_ready.go:93] pod "coredns-6d4b75cb6d-zg4sc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.902674  434272 pod_ready.go:82] duration metric: took 6.118886ms for pod "coredns-6d4b75cb6d-zg4sc" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.902685  434272 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.908847  434272 pod_ready.go:93] pod "etcd-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.908877  434272 pod_ready.go:82] duration metric: took 6.183428ms for pod "etcd-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.908893  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.914745  434272 pod_ready.go:93] pod "kube-apiserver-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.914771  434272 pod_ready.go:82] duration metric: took 5.843606ms for pod "kube-apiserver-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.914784  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.920254  434272 pod_ready.go:93] pod "kube-controller-manager-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.920281  434272 pod_ready.go:82] duration metric: took 5.487406ms for pod "kube-controller-manager-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.920293  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkkqm" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.294015  434272 pod_ready.go:93] pod "kube-proxy-fkkqm" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:51.294043  434272 pod_ready.go:82] duration metric: took 373.723183ms for pod "kube-proxy-fkkqm" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.294055  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.694054  434272 pod_ready.go:93] pod "kube-scheduler-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:51.694081  434272 pod_ready.go:82] duration metric: took 400.017907ms for pod "kube-scheduler-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.694095  434272 pod_ready.go:39] duration metric: took 2.815486217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:55:51.694133  434272 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:55:51.694208  434272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:55:51.705685  434272 api_server.go:72] duration metric: took 20.592115465s to wait for apiserver process to appear ...
	I0127 11:55:51.705719  434272 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:55:51.705757  434272 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 11:55:51.714317  434272 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 11:55:51.715297  434272 api_server.go:141] control plane version: v1.24.4
	I0127 11:55:51.715326  434272 api_server.go:131] duration metric: took 9.600253ms to wait for apiserver health ...
	I0127 11:55:51.715336  434272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:55:51.897586  434272 system_pods.go:59] 9 kube-system pods found
	I0127 11:55:51.897623  434272 system_pods.go:61] "coredns-6d4b75cb6d-hgtpg" [3f3183bc-8ab4-4d96-af53-11e6d4a92b33] Running
	I0127 11:55:51.897630  434272 system_pods.go:61] "coredns-6d4b75cb6d-zg4sc" [089aa46b-565c-4c28-ab5c-ee8612cbd71e] Running
	I0127 11:55:51.897635  434272 system_pods.go:61] "etcd-test-preload-250630" [6b3aff6f-215c-49c0-9348-e8641959e130] Running
	I0127 11:55:51.897640  434272 system_pods.go:61] "kindnet-rljhx" [c82c2d68-4fa6-4bc5-8977-4307e520134d] Running
	I0127 11:55:51.897644  434272 system_pods.go:61] "kube-apiserver-test-preload-250630" [20bcd548-8694-43be-8904-1aab8d64581f] Running
	I0127 11:55:51.897649  434272 system_pods.go:61] "kube-controller-manager-test-preload-250630" [8cda4b66-c9e0-4f60-8f09-e1c0b4b15aa4] Running
	I0127 11:55:51.897653  434272 system_pods.go:61] "kube-proxy-fkkqm" [d22937ef-3dbc-44b6-8694-bb29ffede6a1] Running
	I0127 11:55:51.897657  434272 system_pods.go:61] "kube-scheduler-test-preload-250630" [702bfcac-8519-4bd7-a5ce-627392f3a087] Running
	I0127 11:55:51.897666  434272 system_pods.go:61] "storage-provisioner" [3bf73def-3502-4824-b94e-3272ddc86c8e] Running
	I0127 11:55:51.897677  434272 system_pods.go:74] duration metric: took 182.335704ms to wait for pod list to return data ...
	I0127 11:55:51.897688  434272 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:55:52.093707  434272 default_sa.go:45] found service account: "default"
	I0127 11:55:52.093738  434272 default_sa.go:55] duration metric: took 196.043167ms for default service account to be created ...
	I0127 11:55:52.093750  434272 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:55:52.297703  434272 system_pods.go:87] 9 kube-system pods found

** /stderr **
preload_test.go:46: out/minikube-linux-arm64 start -p test-preload-250630 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4 failed: signal: killed
panic.go:629: *** TestPreload FAILED at 2025-01-27 12:34:14.309185893 +0000 UTC m=+4593.256869918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-250630
helpers_test.go:235: (dbg) docker inspect test-preload-250630:

-- stdout --
	[
	    {
	        "Id": "c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667",
	        "Created": "2025-01-27T11:54:15.71674355Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434678,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T11:54:16.000350643Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/hosts",
	        "LogPath": "/var/lib/docker/containers/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667-json.log",
	        "Name": "/test-preload-250630",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-250630:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-250630",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12880dff9d34a76998ee5784d4a3dd0eda3a12cd61d40e8752f30071cb538121-init/diff:/var/lib/docker/overlay2/f9679fb4b68b50924b42b41bb8163a036f86217b5bdb257ff1bd6b1d4c169198/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12880dff9d34a76998ee5784d4a3dd0eda3a12cd61d40e8752f30071cb538121/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12880dff9d34a76998ee5784d4a3dd0eda3a12cd61d40e8752f30071cb538121/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12880dff9d34a76998ee5784d4a3dd0eda3a12cd61d40e8752f30071cb538121/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-250630",
	                "Source": "/var/lib/docker/volumes/test-preload-250630/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-250630",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-250630",
	                "name.minikube.sigs.k8s.io": "test-preload-250630",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e13aab122202c4cf9c1f9da4ad138c0fed59b050210e54c7aa09bfcb98ff1b3",
	            "SandboxKey": "/var/run/docker/netns/6e13aab12220",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33323"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33324"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33327"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33325"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33326"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-250630": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "83c0f26c3ee253f5481f708a40c0f1d755e63fa0b9774e593904d45fa1305b66",
	                    "EndpointID": "935258adb4766f6afe540b047bc3149d493473a21596295472dbde2c13badb22",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-250630",
	                        "c1511eb3d784"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p test-preload-250630 -n test-preload-250630
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-250630 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p test-preload-250630 logs -n 25: (1.321833626s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-868030 ssh -n                                                                 | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | multinode-868030-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-868030 cp multinode-868030-m03:/home/docker/cp-test.txt                       | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile539082254/001/cp-test_multinode-868030-m03.txt          |                      |         |         |                     |                     |
	| ssh     | multinode-868030 ssh -n                                                                 | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | multinode-868030-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-868030 cp multinode-868030-m03:/home/docker/cp-test.txt                       | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | multinode-868030:/home/docker/cp-test_multinode-868030-m03_multinode-868030.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-868030 ssh -n                                                                 | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | multinode-868030-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-868030 ssh -n multinode-868030 sudo cat                                       | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | /home/docker/cp-test_multinode-868030-m03_multinode-868030.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-868030 cp multinode-868030-m03:/home/docker/cp-test.txt                       | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | multinode-868030-m02:/home/docker/cp-test_multinode-868030-m03_multinode-868030-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-868030 ssh -n                                                                 | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | multinode-868030-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-868030 ssh -n multinode-868030-m02 sudo cat                                   | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | /home/docker/cp-test_multinode-868030-m03_multinode-868030-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-868030 node stop m03                                                          | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	| node    | multinode-868030 node start                                                             | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-868030                                                                | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC |                     |
	| stop    | -p multinode-868030                                                                     | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:51 UTC |
	| start   | -p multinode-868030                                                                     | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:51 UTC | 27 Jan 25 11:52 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-868030                                                                | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC |                     |
	| node    | multinode-868030 node delete                                                            | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:52 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-868030 stop                                                                   | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:52 UTC |
	| start   | -p multinode-868030                                                                     | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:53 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-868030                                                                | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	| start   | -p multinode-868030-m02                                                                 | multinode-868030-m02 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-868030-m03                                                                 | multinode-868030-m03 | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:54 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-868030                                                                 | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC |                     |
	| delete  | -p multinode-868030-m03                                                                 | multinode-868030-m03 | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC | 27 Jan 25 11:54 UTC |
	| delete  | -p multinode-868030                                                                     | multinode-868030     | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC | 27 Jan 25 11:54 UTC |
	| start   | -p test-preload-250630                                                                  | test-preload-250630  | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC |                     |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:54:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:54:14.294778  434272 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:54:14.295007  434272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:14.295017  434272 out.go:358] Setting ErrFile to fd 2...
	I0127 11:54:14.295022  434272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:14.295314  434272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:54:14.295756  434272 out.go:352] Setting JSON to false
	I0127 11:54:14.296698  434272 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9402,"bootTime":1737969453,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 11:54:14.296770  434272 start.go:139] virtualization:  
	I0127 11:54:14.301032  434272 out.go:177] * [test-preload-250630] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:54:14.305582  434272 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:54:14.305785  434272 notify.go:220] Checking for updates...
	I0127 11:54:14.312445  434272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:54:14.315673  434272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:54:14.319130  434272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 11:54:14.322376  434272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:54:14.325612  434272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:54:14.329024  434272 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:54:14.361704  434272 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:54:14.361819  434272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:54:14.419504  434272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 11:54:14.41013069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:54:14.419618  434272 docker.go:318] overlay module found
	I0127 11:54:14.424720  434272 out.go:177] * Using the docker driver based on user configuration
	I0127 11:54:14.427605  434272 start.go:297] selected driver: docker
	I0127 11:54:14.427634  434272 start.go:901] validating driver "docker" against <nil>
	I0127 11:54:14.427649  434272 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:54:14.428397  434272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:54:14.484932  434272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 11:54:14.476031592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:54:14.485188  434272 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:54:14.485450  434272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:54:14.488352  434272 out.go:177] * Using Docker driver with root privileges
	I0127 11:54:14.491268  434272 cni.go:84] Creating CNI manager for ""
	I0127 11:54:14.491332  434272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:54:14.491360  434272 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:54:14.491470  434272 start.go:340] cluster config:
	{Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:54:14.494565  434272 out.go:177] * Starting "test-preload-250630" primary control-plane node in "test-preload-250630" cluster
	I0127 11:54:14.497436  434272 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 11:54:14.500299  434272 out.go:177] * Pulling base image v0.0.46 ...
	I0127 11:54:14.503167  434272 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 11:54:14.503259  434272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 11:54:14.503577  434272 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/config.json ...
	I0127 11:54:14.503618  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/config.json: {Name:mk57dd6cf3c53c6ada1352fee437504d65222850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:14.503895  434272 cache.go:107] acquiring lock: {Name:mk9692d47e200f11d4993f236bd01b0a253b91b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.504050  434272 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:14.504442  434272 cache.go:107] acquiring lock: {Name:mk190b5e35ee09ef09980447d286561c72c39a13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.504618  434272 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:14.504880  434272 cache.go:107] acquiring lock: {Name:mkc069e0a063b6e8c91c2f43ee1592d05b5686fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.505014  434272 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:14.505261  434272 cache.go:107] acquiring lock: {Name:mkb9ee01f6fbb348522e3d1ad78b1802312feffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.505385  434272 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:14.505642  434272 cache.go:107] acquiring lock: {Name:mkc5535169ee61955686542402fc72777a25f235 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.505763  434272 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:14.505973  434272 cache.go:107] acquiring lock: {Name:mk09db466f53eb520fd7ab5dc91e364b760e662e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.506094  434272 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 11:54:14.506314  434272 cache.go:107] acquiring lock: {Name:mk7b4b2fb26cbf6e45f6b6c1738d7a2530518e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.506427  434272 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:14.506669  434272 cache.go:107] acquiring lock: {Name:mkf29f668ed0239f52497c64b0fe1330729cd339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.506836  434272 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:14.509198  434272 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:14.509605  434272 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:14.509787  434272 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 11:54:14.509963  434272 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:14.510754  434272 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:14.510952  434272 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:14.511114  434272 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:14.511262  434272 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:14.529390  434272 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 11:54:14.529415  434272 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 11:54:14.529434  434272 cache.go:227] Successfully downloaded all kic artifacts
	I0127 11:54:14.529477  434272 start.go:360] acquireMachinesLock for test-preload-250630: {Name:mk87d4fcbdbe0721e27b7c7e4174fc1b79e5479a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:54:14.529610  434272 start.go:364] duration metric: took 112.525µs to acquireMachinesLock for "test-preload-250630"
	I0127 11:54:14.529644  434272 start.go:93] Provisioning new machine with config: &{Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:54:14.529726  434272 start.go:125] createHost starting for "" (driver="docker")
	I0127 11:54:14.533631  434272 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0127 11:54:14.533901  434272 start.go:159] libmachine.API.Create for "test-preload-250630" (driver="docker")
	I0127 11:54:14.533940  434272 client.go:168] LocalClient.Create starting
	I0127 11:54:14.534010  434272 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem
	I0127 11:54:14.534052  434272 main.go:141] libmachine: Decoding PEM data...
	I0127 11:54:14.534081  434272 main.go:141] libmachine: Parsing certificate...
	I0127 11:54:14.534141  434272 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem
	I0127 11:54:14.534165  434272 main.go:141] libmachine: Decoding PEM data...
	I0127 11:54:14.534175  434272 main.go:141] libmachine: Parsing certificate...
	I0127 11:54:14.534543  434272 cli_runner.go:164] Run: docker network inspect test-preload-250630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 11:54:14.565871  434272 cli_runner.go:211] docker network inspect test-preload-250630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 11:54:14.565962  434272 network_create.go:284] running [docker network inspect test-preload-250630] to gather additional debugging logs...
	I0127 11:54:14.565982  434272 cli_runner.go:164] Run: docker network inspect test-preload-250630
	W0127 11:54:14.585049  434272 cli_runner.go:211] docker network inspect test-preload-250630 returned with exit code 1
	I0127 11:54:14.585096  434272 network_create.go:287] error running [docker network inspect test-preload-250630]: docker network inspect test-preload-250630: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network test-preload-250630 not found
	I0127 11:54:14.585115  434272 network_create.go:289] output of [docker network inspect test-preload-250630]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network test-preload-250630 not found
	
	** /stderr **
	I0127 11:54:14.585223  434272 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:54:14.600477  434272 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83a41a4be89e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bb:86:ff:d6} reservation:<nil>}
	I0127 11:54:14.600866  434272 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b8647f61e26c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:4f:9a:96:61} reservation:<nil>}
	I0127 11:54:14.601141  434272 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8a54f92038ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:21:b4:54:50} reservation:<nil>}
	I0127 11:54:14.601544  434272 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c2d1a0}
	I0127 11:54:14.601571  434272 network_create.go:124] attempt to create docker network test-preload-250630 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 11:54:14.601625  434272 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-250630 test-preload-250630
	I0127 11:54:14.688668  434272 network_create.go:108] docker network test-preload-250630 192.168.76.0/24 created
	I0127 11:54:14.688709  434272 kic.go:121] calculated static IP "192.168.76.2" for the "test-preload-250630" container
	I0127 11:54:14.688826  434272 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 11:54:14.705612  434272 cli_runner.go:164] Run: docker volume create test-preload-250630 --label name.minikube.sigs.k8s.io=test-preload-250630 --label created_by.minikube.sigs.k8s.io=true
	I0127 11:54:14.724647  434272 oci.go:103] Successfully created a docker volume test-preload-250630
	I0127 11:54:14.724744  434272 cli_runner.go:164] Run: docker run --rm --name test-preload-250630-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-250630 --entrypoint /usr/bin/test -v test-preload-250630:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 11:54:15.001333  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0127 11:54:15.063796  434272 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:15.063870  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 11:54:15.074273  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0127 11:54:15.074306  434272 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 568.335823ms
	I0127 11:54:15.074328  434272 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0127 11:54:15.076368  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 11:54:15.076912  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 11:54:15.079259  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 11:54:15.082384  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 11:54:15.099420  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0127 11:54:15.280426  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0127 11:54:15.280527  434272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 773.859946ms
	I0127 11:54:15.280591  434272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0127 11:54:15.411788  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0127 11:54:15.411882  434272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 906.62328ms
	I0127 11:54:15.411939  434272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0127 11:54:15.567629  434272 oci.go:107] Successfully prepared a docker volume test-preload-250630
	I0127 11:54:15.567666  434272 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	W0127 11:54:15.567811  434272 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 11:54:15.567938  434272 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 11:54:15.584968  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0127 11:54:15.584996  434272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 1.080120921s
	I0127 11:54:15.585009  434272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0127 11:54:15.605215  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0127 11:54:15.605249  434272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 1.100812044s
	I0127 11:54:15.605264  434272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0127 11:54:15.632043  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0127 11:54:15.632245  434272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 1.126595088s
	I0127 11:54:15.632301  434272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0127 11:54:15.694010  434272 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-250630 --name test-preload-250630 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-250630 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-250630 --network test-preload-250630 --ip 192.168.76.2 --volume test-preload-250630:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	W0127 11:54:15.709245  434272 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:15.709303  434272 cache.go:162] opening:  /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 11:54:16.209607  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Running}}
	I0127 11:54:16.242121  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:54:16.299591  434272 cli_runner.go:164] Run: docker exec test-preload-250630 stat /var/lib/dpkg/alternatives/iptables
	I0127 11:54:16.332092  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 11:54:16.332120  434272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.828227157s
	I0127 11:54:16.332133  434272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 11:54:16.382541  434272 oci.go:144] the created container "test-preload-250630" has a running status.
	I0127 11:54:16.382580  434272 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa...
	I0127 11:54:16.392769  434272 cache.go:157] /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0127 11:54:16.392797  434272 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 1.886491073s
	I0127 11:54:16.392809  434272 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0127 11:54:16.392820  434272 cache.go:87] Successfully saved all images to host disk.
	I0127 11:54:16.625391  434272 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 11:54:16.655186  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:54:16.681000  434272 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 11:54:16.681024  434272 kic_runner.go:114] Args: [docker exec --privileged test-preload-250630 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 11:54:16.738178  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:54:16.762899  434272 machine.go:93] provisionDockerMachine start ...
	I0127 11:54:16.762997  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:16.790546  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:16.790822  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:16.790834  434272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:54:16.791509  434272 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34796->127.0.0.1:33323: read: connection reset by peer
	I0127 11:54:19.914655  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-250630
	
	I0127 11:54:19.914723  434272 ubuntu.go:169] provisioning hostname "test-preload-250630"
	I0127 11:54:19.914823  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:19.932544  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:19.932801  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:19.932819  434272 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-250630 && echo "test-preload-250630" | sudo tee /etc/hostname
	I0127 11:54:20.080621  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-250630
	
	I0127 11:54:20.080751  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:20.101062  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:20.101325  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:20.101351  434272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-250630' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-250630/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-250630' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:54:20.227222  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:54:20.227252  434272 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20319-300538/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-300538/.minikube}
	I0127 11:54:20.227272  434272 ubuntu.go:177] setting up certificates
	I0127 11:54:20.227282  434272 provision.go:84] configureAuth start
	I0127 11:54:20.227361  434272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-250630
	I0127 11:54:20.245308  434272 provision.go:143] copyHostCerts
	I0127 11:54:20.245390  434272 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem, removing ...
	I0127 11:54:20.245403  434272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem
	I0127 11:54:20.245479  434272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem (1679 bytes)
	I0127 11:54:20.245573  434272 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem, removing ...
	I0127 11:54:20.245582  434272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem
	I0127 11:54:20.245608  434272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem (1082 bytes)
	I0127 11:54:20.245665  434272 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem, removing ...
	I0127 11:54:20.245673  434272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem
	I0127 11:54:20.245696  434272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem (1123 bytes)
	I0127 11:54:20.245755  434272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem org=jenkins.test-preload-250630 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-250630]
	I0127 11:54:21.089137  434272 provision.go:177] copyRemoteCerts
	I0127 11:54:21.089238  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:54:21.089289  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.106780  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.195921  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:54:21.220806  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 11:54:21.245131  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:54:21.269331  434272 provision.go:87] duration metric: took 1.04203529s to configureAuth
	I0127 11:54:21.269357  434272 ubuntu.go:193] setting minikube options for container-runtime
	I0127 11:54:21.269540  434272 config.go:182] Loaded profile config "test-preload-250630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 11:54:21.269648  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.286758  434272 main.go:141] libmachine: Using SSH client type: native
	I0127 11:54:21.286996  434272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33323 <nil> <nil>}
	I0127 11:54:21.287018  434272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 11:54:21.514503  434272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 11:54:21.514532  434272 machine.go:96] duration metric: took 4.751605663s to provisionDockerMachine
	I0127 11:54:21.514543  434272 client.go:171] duration metric: took 6.980588018s to LocalClient.Create
	I0127 11:54:21.514556  434272 start.go:167] duration metric: took 6.980655382s to libmachine.API.Create "test-preload-250630"
	I0127 11:54:21.514564  434272 start.go:293] postStartSetup for "test-preload-250630" (driver="docker")
	I0127 11:54:21.514575  434272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:54:21.514641  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:54:21.514687  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.531456  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.620621  434272 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:54:21.623840  434272 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 11:54:21.623879  434272 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 11:54:21.623890  434272 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 11:54:21.623898  434272 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 11:54:21.623909  434272 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/addons for local assets ...
	I0127 11:54:21.623975  434272 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/files for local assets ...
	I0127 11:54:21.624059  434272 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem -> 3059362.pem in /etc/ssl/certs
	I0127 11:54:21.624166  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:54:21.632450  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem --> /etc/ssl/certs/3059362.pem (1708 bytes)
	I0127 11:54:21.656764  434272 start.go:296] duration metric: took 142.183943ms for postStartSetup
	I0127 11:54:21.657131  434272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-250630
	I0127 11:54:21.678254  434272 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/config.json ...
	I0127 11:54:21.678545  434272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:54:21.678599  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.695997  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.779835  434272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 11:54:21.784289  434272 start.go:128] duration metric: took 7.254549085s to createHost
	I0127 11:54:21.784312  434272 start.go:83] releasing machines lock for "test-preload-250630", held for 7.254686816s
	I0127 11:54:21.784387  434272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-250630
	I0127 11:54:21.801300  434272 ssh_runner.go:195] Run: cat /version.json
	I0127 11:54:21.801364  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.801617  434272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:54:21.801676  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:21.829895  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.840687  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:21.914567  434272 ssh_runner.go:195] Run: systemctl --version
	I0127 11:54:22.051213  434272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 11:54:22.192014  434272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:54:22.196266  434272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:54:22.220232  434272 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0127 11:54:22.220363  434272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:54:22.253861  434272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 11:54:22.253921  434272 start.go:495] detecting cgroup driver to use...
	I0127 11:54:22.253970  434272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 11:54:22.254041  434272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:54:22.270169  434272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:54:22.282609  434272 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:54:22.282713  434272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:54:22.296604  434272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:54:22.310967  434272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:54:22.399562  434272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:54:22.491967  434272 docker.go:233] disabling docker service ...
	I0127 11:54:22.492043  434272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:54:22.514297  434272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:54:22.527228  434272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:54:22.616182  434272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:54:22.717952  434272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:54:22.729182  434272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:54:22.745808  434272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 11:54:22.745922  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.756612  434272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 11:54:22.756725  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.767861  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.778681  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.789716  434272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:54:22.799600  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.810176  434272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.826826  434272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 11:54:22.837276  434272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:54:22.845784  434272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:54:22.854498  434272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:54:22.944430  434272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 11:54:23.057810  434272 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 11:54:23.057968  434272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 11:54:23.061928  434272 start.go:563] Will wait 60s for crictl version
	I0127 11:54:23.062064  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.065779  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:54:23.105119  434272 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0127 11:54:23.105212  434272 ssh_runner.go:195] Run: crio --version
	I0127 11:54:23.144687  434272 ssh_runner.go:195] Run: crio --version
	I0127 11:54:23.188431  434272 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.6 ...
	I0127 11:54:23.191397  434272 cli_runner.go:164] Run: docker network inspect test-preload-250630 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:54:23.208019  434272 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 11:54:23.211828  434272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:54:23.222783  434272 kubeadm.go:883] updating cluster {Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:54:23.222899  434272 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 11:54:23.222946  434272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:54:23.258229  434272 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 11:54:23.258258  434272 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 11:54:23.258302  434272 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:23.258331  434272 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.258511  434272 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 11:54:23.258523  434272 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:23.258601  434272 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.258610  434272 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.258681  434272 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.258512  434272 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.261142  434272 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.261194  434272 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.261248  434272 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.261142  434272 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.261392  434272 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.261522  434272 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:23.261587  434272 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:23.261640  434272 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 11:54:23.630402  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.669672  434272 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0127 11:54:23.669710  434272 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.669765  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.673598  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.712902  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.716759  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.722340  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.723348  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 11:54:23.730803  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.732391  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	W0127 11:54:23.734358  434272 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:23.734552  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.771335  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 11:54:23.854883  434272 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "bd8cc6d58247078a865774b7f516f8afc3ac8cd080fd49650ca30ef2fbc6ebd1" in container runtime
	I0127 11:54:23.854936  434272 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.854987  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.894695  434272 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0127 11:54:23.894739  434272 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 11:54:23.894788  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.894858  434272 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "5753e4610b3ec0ac100c3535b8d8a7507b3d031148e168c2c3c4b0f389976074" in container runtime
	I0127 11:54:23.894878  434272 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.894916  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.925926  434272 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "3767741e7fba72f328a8500a18ef34481343eb78697e31ae5bf3e390a28317ae" in container runtime
	I0127 11:54:23.925969  434272 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.926023  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.931918  434272 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "81a4a8a4ac639bdd7e118359417a80cab1a0d0e4737eb735714cf7f8b15dc0c7" in container runtime
	I0127 11:54:23.931964  434272 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:23.932014  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.932099  434272 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0127 11:54:23.932117  434272 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.932140  434272 ssh_runner.go:195] Run: which crictl
	I0127 11:54:23.940483  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:23.940558  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0127 11:54:23.940627  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:54:23.940695  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:23.940742  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:54:23.940792  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:23.941858  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:23.941921  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:24.090184  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 11:54:24.090262  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0127 11:54:24.090283  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0127 11:54:24.090362  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:24.090448  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:54:24.090528  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:24.090614  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:24.090687  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:24.289053  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 11:54:24.289147  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 11:54:24.289206  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 11:54:24.289267  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 11:54:24.289324  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 11:54:24.289431  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	W0127 11:54:24.293272  434272 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0127 11:54:24.293562  434272 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:24.500207  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 11:54:24.500369  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:54:24.500487  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0127 11:54:24.500596  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 11:54:24.500718  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 11:54:24.500799  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:54:24.500881  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 11:54:24.500960  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:54:24.501046  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 11:54:24.501118  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:54:24.501203  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 11:54:24.501291  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:54:24.501380  434272 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0127 11:54:24.501428  434272 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:24.501476  434272 ssh_runner.go:195] Run: which crictl
	W0127 11:54:24.510627  434272 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0127 11:54:24.510713  434272 retry.go:31] will retry after 170.997262ms: ssh: rejected: connect failed (open failed)
	I0127 11:54:24.550782  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0127 11:54:24.550824  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0127 11:54:24.550881  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.551113  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0127 11:54:24.551136  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0127 11:54:24.551175  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.551461  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.24.4': No such file or directory
	I0127 11:54:24.551488  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 --> /var/lib/minikube/images/kube-apiserver_v1.24.4 (30873088 bytes)
	I0127 11:54:24.551528  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.551977  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.24.4': No such file or directory
	I0127 11:54:24.552005  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 --> /var/lib/minikube/images/kube-controller-manager_v1.24.4 (28246528 bytes)
	I0127 11:54:24.552047  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.555903  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.24.4': No such file or directory
	I0127 11:54:24.555952  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 --> /var/lib/minikube/images/kube-scheduler_v1.24.4 (14094336 bytes)
	I0127 11:54:24.556007  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.566766  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.24.4': No such file or directory
	I0127 11:54:24.566806  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 --> /var/lib/minikube/images/kube-proxy_v1.24.4 (38148096 bytes)
	I0127 11:54:24.566862  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:54:24.605922  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.629621  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.636169  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.649116  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.686358  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.688491  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:54:24.843139  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:25.073048  434272 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 11:54:25.073116  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 11:54:25.239364  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:25.836872  434272 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:54:25.836908  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0127 11:54:25.837087  434272 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:54:25.837136  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 11:54:29.180782  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (3.343619653s)
	I0127 11:54:29.180808  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 11:54:29.180818  434272 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.343837719s)
	I0127 11:54:29.180857  434272 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 11:54:29.180826  434272 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:54:29.180946  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:54:29.180949  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 11:54:30.041376  434272 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0127 11:54:30.041421  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0127 11:54:30.041533  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 11:54:30.041558  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:54:30.041605  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 11:54:31.148571  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (1.106942565s)
	I0127 11:54:31.148606  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 11:54:31.148627  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:54:31.148675  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 11:54:33.522521  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.373819276s)
	I0127 11:54:33.522552  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 11:54:33.522574  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:54:33.522628  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 11:54:35.281593  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.758934925s)
	I0127 11:54:35.281625  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 11:54:35.281648  434272 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:54:35.281700  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 11:54:37.133562  434272 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (1.851833696s)
	I0127 11:54:37.133589  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 11:54:37.133611  434272 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:54:37.133661  434272 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0127 11:54:37.686097  434272 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20319-300538/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0127 11:54:37.686134  434272 cache_images.go:123] Successfully loaded all cached images
	I0127 11:54:37.686140  434272 cache_images.go:92] duration metric: took 14.427868673s to LoadCachedImages
	I0127 11:54:37.686151  434272 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.24.4 crio true true} ...
	I0127 11:54:37.686246  434272 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-250630 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:54:37.686337  434272 ssh_runner.go:195] Run: crio config
	I0127 11:54:37.739114  434272 cni.go:84] Creating CNI manager for ""
	I0127 11:54:37.739139  434272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:54:37.739151  434272 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:54:37.739175  434272 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-250630 NodeName:test-preload-250630 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:54:37.739313  434272 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-250630"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:54:37.739389  434272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 11:54:37.748240  434272 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.24.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.24.4': No such file or directory
	
	Initiating transfer...
	I0127 11:54:37.748302  434272 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.24.4
	I0127 11:54:37.757168  434272 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubectl
	I0127 11:54:37.757561  434272 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubelet
	I0127 11:54:37.757730  434272 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubeadm
	I0127 11:54:38.393755  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm
	I0127 11:54:38.399871  434272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubeadm': No such file or directory
	I0127 11:54:38.399951  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubeadm --> /var/lib/minikube/binaries/v1.24.4/kubeadm (43384832 bytes)
	I0127 11:54:38.551235  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubectl
	I0127 11:54:38.582040  434272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubectl': No such file or directory
	I0127 11:54:38.582145  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubectl --> /var/lib/minikube/binaries/v1.24.4/kubectl (44564480 bytes)
	I0127 11:54:39.289524  434272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:54:39.302194  434272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubelet
	I0127 11:54:39.305726  434272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubelet': No such file or directory
	I0127 11:54:39.305762  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/cache/linux/arm64/v1.24.4/kubelet --> /var/lib/minikube/binaries/v1.24.4/kubelet (112477080 bytes)
	I0127 11:54:39.812956  434272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:54:39.823588  434272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 11:54:39.844492  434272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:54:39.864630  434272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0127 11:54:39.883343  434272 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 11:54:39.886950  434272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:54:39.897824  434272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:54:39.978879  434272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:54:39.993042  434272 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630 for IP: 192.168.76.2
	I0127 11:54:39.993066  434272 certs.go:194] generating shared ca certs ...
	I0127 11:54:39.993095  434272 certs.go:226] acquiring lock for ca certs: {Name:mk949cfe0d73736f3d2e354b486773524a8fcbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:39.993248  434272 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key
	I0127 11:54:39.993294  434272 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key
	I0127 11:54:39.993305  434272 certs.go:256] generating profile certs ...
	I0127 11:54:39.993375  434272 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.key
	I0127 11:54:39.993393  434272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.crt with IP's: []
	I0127 11:54:40.208538  434272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.crt ...
	I0127 11:54:40.208583  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.crt: {Name:mkbe5f84f04fe2fb07110c5f88196f3897cee456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.208831  434272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.key ...
	I0127 11:54:40.208850  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/client.key: {Name:mkafa36dd0ffb47e37684bef9e2739f2d5377e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.208953  434272 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e
	I0127 11:54:40.208977  434272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0127 11:54:40.610197  434272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e ...
	I0127 11:54:40.610233  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e: {Name:mkbccbd20bbb8baa907eaab87a9a805a54d35e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.610426  434272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e ...
	I0127 11:54:40.610440  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e: {Name:mk384ccec8960490a2a560ff304781b2ee8269b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:40.610528  434272 certs.go:381] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt.714fa92e -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt
	I0127 11:54:40.610609  434272 certs.go:385] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key.714fa92e -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key
	I0127 11:54:40.610672  434272 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key
	I0127 11:54:40.610692  434272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt with IP's: []
	I0127 11:54:41.329706  434272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt ...
	I0127 11:54:41.329739  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt: {Name:mk68ca5432b5b3e721d0cde1dd464db7453b1592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:41.329929  434272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key ...
	I0127 11:54:41.329943  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key: {Name:mkd0785a978e80064f2312b070a14f97d0a0985c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:54:41.330131  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936.pem (1338 bytes)
	W0127 11:54:41.330179  434272 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936_empty.pem, impossibly tiny 0 bytes
	I0127 11:54:41.330194  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 11:54:41.330224  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:54:41.330255  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:54:41.330281  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem (1679 bytes)
	I0127 11:54:41.330329  434272 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem (1708 bytes)
	I0127 11:54:41.330972  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:54:41.355350  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 11:54:41.379600  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:54:41.404139  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 11:54:41.428380  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 11:54:41.452078  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:54:41.475686  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:54:41.500365  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/test-preload-250630/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 11:54:41.524473  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem --> /usr/share/ca-certificates/3059362.pem (1708 bytes)
	I0127 11:54:41.548901  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:54:41.574316  434272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936.pem --> /usr/share/ca-certificates/305936.pem (1338 bytes)
	I0127 11:54:41.599982  434272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:54:41.619210  434272 ssh_runner.go:195] Run: openssl version
	I0127 11:54:41.625010  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:54:41.635010  434272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:54:41.639500  434272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:18 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:54:41.639573  434272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:54:41.647899  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:54:41.657994  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/305936.pem && ln -fs /usr/share/ca-certificates/305936.pem /etc/ssl/certs/305936.pem"
	I0127 11:54:41.667826  434272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/305936.pem
	I0127 11:54:41.672539  434272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:26 /usr/share/ca-certificates/305936.pem
	I0127 11:54:41.672654  434272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/305936.pem
	I0127 11:54:41.680316  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/305936.pem /etc/ssl/certs/51391683.0"
	I0127 11:54:41.692638  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3059362.pem && ln -fs /usr/share/ca-certificates/3059362.pem /etc/ssl/certs/3059362.pem"
	I0127 11:54:41.708379  434272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3059362.pem
	I0127 11:54:41.716668  434272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:26 /usr/share/ca-certificates/3059362.pem
	I0127 11:54:41.716789  434272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3059362.pem
	I0127 11:54:41.728041  434272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3059362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:54:41.738692  434272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:54:41.742444  434272 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:54:41.742523  434272 kubeadm.go:392] StartCluster: {Name:test-preload-250630 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-250630 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:54:41.742617  434272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 11:54:41.742679  434272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:54:41.782253  434272 cri.go:89] found id: ""
	I0127 11:54:41.782352  434272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:54:41.791538  434272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:54:41.800621  434272 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 11:54:41.800709  434272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:54:41.809954  434272 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:54:41.809979  434272 kubeadm.go:157] found existing configuration files:
	
	I0127 11:54:41.810046  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:54:41.819774  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:54:41.819852  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:54:41.829107  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:54:41.838684  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:54:41.838784  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:54:41.847890  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:54:41.857084  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:54:41.857184  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:54:41.866112  434272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:54:41.875189  434272 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:54:41.875279  434272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:54:41.884287  434272 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 11:54:41.931651  434272 kubeadm.go:310] [init] Using Kubernetes version: v1.24.4
	I0127 11:54:41.931927  434272 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:54:41.977546  434272 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 11:54:41.977657  434272 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 11:54:41.977730  434272 kubeadm.go:310] OS: Linux
	I0127 11:54:41.977800  434272 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 11:54:41.977862  434272 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 11:54:41.977918  434272 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 11:54:41.977971  434272 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 11:54:41.978023  434272 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 11:54:41.978130  434272 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 11:54:41.978185  434272 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 11:54:41.978239  434272 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 11:54:41.978290  434272 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 11:54:42.071719  434272 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:54:42.071922  434272 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:54:42.072045  434272 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 11:55:02.148362  434272 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:55:02.152044  434272 out.go:235]   - Generating certificates and keys ...
	I0127 11:55:02.152153  434272 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:55:02.152217  434272 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:55:02.776352  434272 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:55:03.490975  434272 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:55:03.928694  434272 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:55:04.196924  434272 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:55:04.413738  434272 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:55:04.414326  434272 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost test-preload-250630] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 11:55:04.608556  434272 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:55:04.608863  434272 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost test-preload-250630] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 11:55:05.154046  434272 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:55:05.462373  434272 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:55:05.667307  434272 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:55:05.667610  434272 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:55:05.875538  434272 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:55:06.547503  434272 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:55:06.992052  434272 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:55:07.940556  434272 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:55:08.027947  434272 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:55:08.028924  434272 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:55:08.029166  434272 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:55:08.131482  434272 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:55:08.135034  434272 out.go:235]   - Booting up control plane ...
	I0127 11:55:08.135163  434272 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:55:08.135241  434272 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:55:08.135317  434272 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:55:08.135728  434272 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:55:08.138325  434272 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 11:55:16.640084  434272 kubeadm.go:310] [apiclient] All control plane components are healthy after 8.502150 seconds
	I0127 11:55:16.640204  434272 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:55:16.654014  434272 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:55:17.175662  434272 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:55:17.175879  434272 kubeadm.go:310] [mark-control-plane] Marking the node test-preload-250630 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:55:17.686113  434272 kubeadm.go:310] [bootstrap-token] Using token: j4uj1s.qtvzwjfj9l0zqgva
	I0127 11:55:17.690486  434272 out.go:235]   - Configuring RBAC rules ...
	I0127 11:55:17.690619  434272 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:55:17.693818  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:55:17.699537  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:55:17.702234  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:55:17.705044  434272 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:55:17.707251  434272 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:55:17.716828  434272 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:55:17.928251  434272 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:55:18.098669  434272 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:55:18.098691  434272 kubeadm.go:310] 
	I0127 11:55:18.098774  434272 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:55:18.098795  434272 kubeadm.go:310] 
	I0127 11:55:18.098902  434272 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:55:18.098912  434272 kubeadm.go:310] 
	I0127 11:55:18.098948  434272 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:55:18.099009  434272 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:55:18.099068  434272 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:55:18.099075  434272 kubeadm.go:310] 
	I0127 11:55:18.099129  434272 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:55:18.099134  434272 kubeadm.go:310] 
	I0127 11:55:18.099181  434272 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:55:18.099186  434272 kubeadm.go:310] 
	I0127 11:55:18.099237  434272 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:55:18.099345  434272 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:55:18.099425  434272 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:55:18.099432  434272 kubeadm.go:310] 
	I0127 11:55:18.099540  434272 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:55:18.099620  434272 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:55:18.099625  434272 kubeadm.go:310] 
	I0127 11:55:18.099756  434272 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j4uj1s.qtvzwjfj9l0zqgva \
	I0127 11:55:18.099873  434272 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 \
	I0127 11:55:18.099896  434272 kubeadm.go:310] 	--control-plane 
	I0127 11:55:18.099900  434272 kubeadm.go:310] 
	I0127 11:55:18.099985  434272 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:55:18.099989  434272 kubeadm.go:310] 
	I0127 11:55:18.100073  434272 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j4uj1s.qtvzwjfj9l0zqgva \
	I0127 11:55:18.100175  434272 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 
	I0127 11:55:18.107771  434272 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 11:55:18.107898  434272 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:55:18.107914  434272 cni.go:84] Creating CNI manager for ""
	I0127 11:55:18.107923  434272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:55:18.111673  434272 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 11:55:18.114696  434272 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 11:55:18.126583  434272 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.24.4/kubectl ...
	I0127 11:55:18.126607  434272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 11:55:18.165752  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 11:55:19.367445  434272 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.20165396s)
	I0127 11:55:19.367490  434272 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:55:19.367606  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:19.367686  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes test-preload-250630 minikube.k8s.io/updated_at=2025_01_27T11_55_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=test-preload-250630 minikube.k8s.io/primary=true
	I0127 11:55:19.496263  434272 ops.go:34] apiserver oom_adj: -16
	I0127 11:55:19.496358  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:19.996412  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:20.496713  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:20.996645  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:21.496490  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:21.997244  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:22.497183  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:22.997151  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:23.497378  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:23.997081  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:24.496638  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:24.996944  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:25.497462  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:25.996573  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:26.496463  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:26.997212  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:27.496596  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:27.996465  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:28.497070  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:28.996934  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:29.496941  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:29.996502  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:30.496400  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:30.997123  434272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:55:31.112499  434272 kubeadm.go:1113] duration metric: took 11.744940046s to wait for elevateKubeSystemPrivileges
	I0127 11:55:31.112528  434272 kubeadm.go:394] duration metric: took 49.370009314s to StartCluster
	I0127 11:55:31.112545  434272 settings.go:142] acquiring lock: {Name:mk59e26dfc61a439e501d9ae8e7cbc4a6f05e310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:55:31.112608  434272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:55:31.113326  434272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/kubeconfig: {Name:mka2258aa0d8dec49c19d97bc831e58d42b19053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:55:31.113540  434272 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 11:55:31.113633  434272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:55:31.113878  434272 config.go:182] Loaded profile config "test-preload-250630": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 11:55:31.113918  434272 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:55:31.114012  434272 addons.go:69] Setting storage-provisioner=true in profile "test-preload-250630"
	I0127 11:55:31.114023  434272 addons.go:69] Setting default-storageclass=true in profile "test-preload-250630"
	I0127 11:55:31.114054  434272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-250630"
	I0127 11:55:31.114027  434272 addons.go:238] Setting addon storage-provisioner=true in "test-preload-250630"
	I0127 11:55:31.114157  434272 host.go:66] Checking if "test-preload-250630" exists ...
	I0127 11:55:31.114430  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:55:31.114614  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:55:31.116755  434272 out.go:177] * Verifying Kubernetes components...
	I0127 11:55:31.119997  434272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:55:31.157026  434272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:55:31.160027  434272 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:55:31.160049  434272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:55:31.160124  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:55:31.165317  434272 addons.go:238] Setting addon default-storageclass=true in "test-preload-250630"
	I0127 11:55:31.165360  434272 host.go:66] Checking if "test-preload-250630" exists ...
	I0127 11:55:31.165779  434272 cli_runner.go:164] Run: docker container inspect test-preload-250630 --format={{.State.Status}}
	I0127 11:55:31.194560  434272 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:55:31.194581  434272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:55:31.194644  434272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-250630
	I0127 11:55:31.204828  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:55:31.230978  434272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/test-preload-250630/id_rsa Username:docker}
	I0127 11:55:31.361764  434272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:55:31.376107  434272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:55:31.427275  434272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:55:31.472224  434272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:55:31.873304  434272 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0127 11:55:31.875220  434272 node_ready.go:35] waiting up to 6m0s for node "test-preload-250630" to be "Ready" ...
	W0127 11:55:31.951094  434272 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "test-preload-250630" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0127 11:55:31.951157  434272 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0127 11:55:32.026299  434272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:55:32.029148  434272 addons.go:514] duration metric: took 915.229099ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 11:55:33.879200  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:36.378763  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:38.878636  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:40.879345  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:43.379380  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:45.379990  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:47.879520  434272 node_ready.go:53] node "test-preload-250630" has status "Ready":"False"
	I0127 11:55:48.878550  434272 node_ready.go:49] node "test-preload-250630" has status "Ready":"True"
	I0127 11:55:48.878573  434272 node_ready.go:38] duration metric: took 17.00332387s for node "test-preload-250630" to be "Ready" ...
	I0127 11:55:48.878585  434272 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:55:48.889736  434272 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-hgtpg" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.896505  434272 pod_ready.go:93] pod "coredns-6d4b75cb6d-hgtpg" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.896534  434272 pod_ready.go:82] duration metric: took 2.006709037s for pod "coredns-6d4b75cb6d-hgtpg" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.896547  434272 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-zg4sc" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.902649  434272 pod_ready.go:93] pod "coredns-6d4b75cb6d-zg4sc" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.902674  434272 pod_ready.go:82] duration metric: took 6.118886ms for pod "coredns-6d4b75cb6d-zg4sc" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.902685  434272 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.908847  434272 pod_ready.go:93] pod "etcd-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.908877  434272 pod_ready.go:82] duration metric: took 6.183428ms for pod "etcd-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.908893  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.914745  434272 pod_ready.go:93] pod "kube-apiserver-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.914771  434272 pod_ready.go:82] duration metric: took 5.843606ms for pod "kube-apiserver-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.914784  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.920254  434272 pod_ready.go:93] pod "kube-controller-manager-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:50.920281  434272 pod_ready.go:82] duration metric: took 5.487406ms for pod "kube-controller-manager-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:50.920293  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkkqm" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.294015  434272 pod_ready.go:93] pod "kube-proxy-fkkqm" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:51.294043  434272 pod_ready.go:82] duration metric: took 373.723183ms for pod "kube-proxy-fkkqm" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.294055  434272 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.694054  434272 pod_ready.go:93] pod "kube-scheduler-test-preload-250630" in "kube-system" namespace has status "Ready":"True"
	I0127 11:55:51.694081  434272 pod_ready.go:82] duration metric: took 400.017907ms for pod "kube-scheduler-test-preload-250630" in "kube-system" namespace to be "Ready" ...
	I0127 11:55:51.694095  434272 pod_ready.go:39] duration metric: took 2.815486217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:55:51.694133  434272 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:55:51.694208  434272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:55:51.705685  434272 api_server.go:72] duration metric: took 20.592115465s to wait for apiserver process to appear ...
	I0127 11:55:51.705719  434272 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:55:51.705757  434272 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 11:55:51.714317  434272 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 11:55:51.715297  434272 api_server.go:141] control plane version: v1.24.4
	I0127 11:55:51.715326  434272 api_server.go:131] duration metric: took 9.600253ms to wait for apiserver health ...
	I0127 11:55:51.715336  434272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:55:51.897586  434272 system_pods.go:59] 9 kube-system pods found
	I0127 11:55:51.897623  434272 system_pods.go:61] "coredns-6d4b75cb6d-hgtpg" [3f3183bc-8ab4-4d96-af53-11e6d4a92b33] Running
	I0127 11:55:51.897630  434272 system_pods.go:61] "coredns-6d4b75cb6d-zg4sc" [089aa46b-565c-4c28-ab5c-ee8612cbd71e] Running
	I0127 11:55:51.897635  434272 system_pods.go:61] "etcd-test-preload-250630" [6b3aff6f-215c-49c0-9348-e8641959e130] Running
	I0127 11:55:51.897640  434272 system_pods.go:61] "kindnet-rljhx" [c82c2d68-4fa6-4bc5-8977-4307e520134d] Running
	I0127 11:55:51.897644  434272 system_pods.go:61] "kube-apiserver-test-preload-250630" [20bcd548-8694-43be-8904-1aab8d64581f] Running
	I0127 11:55:51.897649  434272 system_pods.go:61] "kube-controller-manager-test-preload-250630" [8cda4b66-c9e0-4f60-8f09-e1c0b4b15aa4] Running
	I0127 11:55:51.897653  434272 system_pods.go:61] "kube-proxy-fkkqm" [d22937ef-3dbc-44b6-8694-bb29ffede6a1] Running
	I0127 11:55:51.897657  434272 system_pods.go:61] "kube-scheduler-test-preload-250630" [702bfcac-8519-4bd7-a5ce-627392f3a087] Running
	I0127 11:55:51.897666  434272 system_pods.go:61] "storage-provisioner" [3bf73def-3502-4824-b94e-3272ddc86c8e] Running
	I0127 11:55:51.897677  434272 system_pods.go:74] duration metric: took 182.335704ms to wait for pod list to return data ...
	I0127 11:55:51.897688  434272 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:55:52.093707  434272 default_sa.go:45] found service account: "default"
	I0127 11:55:52.093738  434272 default_sa.go:55] duration metric: took 196.043167ms for default service account to be created ...
	I0127 11:55:52.093750  434272 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:55:52.297703  434272 system_pods.go:87] 9 kube-system pods found
	
	
	==> CRI-O <==
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.031668967Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/accf2f4b81aee703d8c2508eb5e58515a8e555e65ffda314ff3d46276bd16054/merged/etc/passwd: no such file or directory"
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.031728660Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/accf2f4b81aee703d8c2508eb5e58515a8e555e65ffda314ff3d46276bd16054/merged/etc/group: no such file or directory"
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.068029632Z" level=info msg="Created container 8d0d2d6cfdc8198a7e6c4995a4ff8f3213d56176303f03e6bd2b837103889803: kube-system/coredns-6d4b75cb6d-zg4sc/coredns" id=d4b1dd4a-0be6-4f88-a331-4b9504efc213 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.068751353Z" level=info msg="Starting container: 8d0d2d6cfdc8198a7e6c4995a4ff8f3213d56176303f03e6bd2b837103889803" id=897e79e0-d858-4347-a001-581b1fa6da9f name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.077917080Z" level=info msg="Started container" PID=2932 containerID=8d0d2d6cfdc8198a7e6c4995a4ff8f3213d56176303f03e6bd2b837103889803 description=kube-system/coredns-6d4b75cb6d-zg4sc/coredns id=897e79e0-d858-4347-a001-581b1fa6da9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=b10a82bfa6ff7019e6b1d1b2f5d24ac9bd1bc5e0b1dfcb898728c01d5f361c29
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.108516215Z" level=info msg="Created container 9263a995f215adf6673c40db4908e448f79905d79cdaf0a7f1572ba8e83e7cd2: kube-system/coredns-6d4b75cb6d-hgtpg/coredns" id=25126846-0ed4-4001-a4b8-bcfc122a74ef name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.109061263Z" level=info msg="Starting container: 9263a995f215adf6673c40db4908e448f79905d79cdaf0a7f1572ba8e83e7cd2" id=1fd24df0-de04-4901-895d-2f4a579eb920 name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.118281874Z" level=info msg="Created container 61928e9a0002e44b6cc17ec36f7b26270e9f8e0b6e5ce6418853599d365c629a: kube-system/storage-provisioner/storage-provisioner" id=c57567ad-0df9-4806-ad81-5967a0e103c5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.118803603Z" level=info msg="Starting container: 61928e9a0002e44b6cc17ec36f7b26270e9f8e0b6e5ce6418853599d365c629a" id=88eff2a8-a8a6-4108-848d-317161490f87 name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.123115555Z" level=info msg="Started container" PID=2964 containerID=9263a995f215adf6673c40db4908e448f79905d79cdaf0a7f1572ba8e83e7cd2 description=kube-system/coredns-6d4b75cb6d-hgtpg/coredns id=1fd24df0-de04-4901-895d-2f4a579eb920 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f7c590ca60c675e63e079df5e8cf885995f86f2ebadf941e842850aaec584e6
	Jan 27 11:55:49 test-preload-250630 crio[983]: time="2025-01-27 11:55:49.138203000Z" level=info msg="Started container" PID=2940 containerID=61928e9a0002e44b6cc17ec36f7b26270e9f8e0b6e5ce6418853599d365c629a description=kube-system/storage-provisioner/storage-provisioner id=88eff2a8-a8a6-4108-848d-317161490f87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16400edc20d490f11d0966f03262fb6f9b331b058cd43b80ef3e80930dd35067
	Jan 27 12:00:18 test-preload-250630 crio[983]: time="2025-01-27 12:00:18.168522898Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.7" id=399c939e-9cf6-4152-bbce-5056b32af1a2 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:00:18 test-preload-250630 crio[983]: time="2025-01-27 12:00:18.168775204Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1 registry.k8s.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1],Size_:520791,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=399c939e-9cf6-4152-bbce-5056b32af1a2 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:05:18 test-preload-250630 crio[983]: time="2025-01-27 12:05:18.171861239Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.7" id=41c7a9bb-065b-4261-9359-4304ead17229 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:05:18 test-preload-250630 crio[983]: time="2025-01-27 12:05:18.172116565Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1 registry.k8s.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1],Size_:520791,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=41c7a9bb-065b-4261-9359-4304ead17229 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:10:18 test-preload-250630 crio[983]: time="2025-01-27 12:10:18.174977733Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.7" id=4a271eee-fae6-4e07-9667-e64efdfa14d1 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:10:18 test-preload-250630 crio[983]: time="2025-01-27 12:10:18.175249583Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1 registry.k8s.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1],Size_:520791,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=4a271eee-fae6-4e07-9667-e64efdfa14d1 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:15:18 test-preload-250630 crio[983]: time="2025-01-27 12:15:18.177794716Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.7" id=3ad8f969-0e7d-4ffa-838e-2b8fea6cff61 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:15:18 test-preload-250630 crio[983]: time="2025-01-27 12:15:18.178061849Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1 registry.k8s.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1],Size_:520791,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=3ad8f969-0e7d-4ffa-838e-2b8fea6cff61 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:20:18 test-preload-250630 crio[983]: time="2025-01-27 12:20:18.181024553Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.7" id=58eb8685-daeb-4725-874a-26100850aa77 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:20:18 test-preload-250630 crio[983]: time="2025-01-27 12:20:18.181271682Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1 registry.k8s.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1],Size_:520791,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=58eb8685-daeb-4725-874a-26100850aa77 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:25:18 test-preload-250630 crio[983]: time="2025-01-27 12:25:18.184165539Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.7" id=c5df4040-8687-4cef-8e70-2519411a65ad name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:25:18 test-preload-250630 crio[983]: time="2025-01-27 12:25:18.184419757Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1 registry.k8s.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1],Size_:520791,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=c5df4040-8687-4cef-8e70-2519411a65ad name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:30:18 test-preload-250630 crio[983]: time="2025-01-27 12:30:18.188035017Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.7" id=eced989e-ba0b-4af3-9e0a-ae7f81b8e672 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:30:18 test-preload-250630 crio[983]: time="2025-01-27 12:30:18.188300517Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1 registry.k8s.io/pause@sha256:740ebc62b6f592c085d4c2b44fb2c65b72e64745be195ea83cdaeb682aa2903f registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1],Size_:520791,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=eced989e-ba0b-4af3-9e0a-ae7f81b8e672 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9263a995f215a       edaa71f2aee883484133da046954ad70fd6bf1fa42e5aec3f7dae199c626299c                                     38 minutes ago      Running             coredns                   0                   0f7c590ca60c6       coredns-6d4b75cb6d-hgtpg
	61928e9a0002e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                     38 minutes ago      Running             storage-provisioner       0                   16400edc20d49       storage-provisioner
	8d0d2d6cfdc81       edaa71f2aee883484133da046954ad70fd6bf1fa42e5aec3f7dae199c626299c                                     38 minutes ago      Running             coredns                   0                   b10a82bfa6ff7       coredns-6d4b75cb6d-zg4sc
	4f1394d480e55       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e   38 minutes ago      Running             kindnet-cni               0                   3a6cc8c8c25a7       kindnet-rljhx
	14e1b64f410c7       bd8cc6d58247078a865774b7f516f8afc3ac8cd080fd49650ca30ef2fbc6ebd1                                     38 minutes ago      Running             kube-proxy                0                   5234f8b212d18       kube-proxy-fkkqm
	4d614a2eba620       5753e4610b3ec0ac100c3535b8d8a7507b3d031148e168c2c3c4b0f389976074                                     39 minutes ago      Running             kube-scheduler            0                   6dddc4ec2a142       kube-scheduler-test-preload-250630
	0787e7a1c51c2       81a4a8a4ac639bdd7e118359417a80cab1a0d0e4737eb735714cf7f8b15dc0c7                                     39 minutes ago      Running             kube-controller-manager   0                   cf1eb4223ea90       kube-controller-manager-test-preload-250630
	59df883a4a000       3767741e7fba72f328a8500a18ef34481343eb78697e31ae5bf3e390a28317ae                                     39 minutes ago      Running             kube-apiserver            0                   25ea3526ec70c       kube-apiserver-test-preload-250630
	b6ebc758d6d0d       a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a                                     39 minutes ago      Running             etcd                      0                   307f070dd342d       etcd-test-preload-250630
	
	
	==> coredns [8d0d2d6cfdc8198a7e6c4995a4ff8f3213d56176303f03e6bd2b837103889803] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:43952 - 53864 "HINFO IN 6305419154265674688.4150250388363245629. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022662597s
	
	
	==> coredns [9263a995f215adf6673c40db4908e448f79905d79cdaf0a7f1572ba8e83e7cd2] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:59347 - 42277 "HINFO IN 7717842392932372144.5800915289407359883. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008133378s
	
	
	==> describe nodes <==
	Name:               test-preload-250630
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=test-preload-250630
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=test-preload-250630
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_55_19_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:55:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-250630
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:34:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:31:32 +0000   Mon, 27 Jan 2025 11:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:31:32 +0000   Mon, 27 Jan 2025 11:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:31:32 +0000   Mon, 27 Jan 2025 11:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:31:32 +0000   Mon, 27 Jan 2025 11:55:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    test-preload-250630
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 43e1815534704eef96a378d271f2de59
	  System UUID:                e9eac13d-40d2-49b9-8e06-ad6ac6cda3ff
	  Boot ID:                    dd59411c-5b67-4eb9-9e59-86d920ad153c
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hgtpg                       100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     38m
	  kube-system                 coredns-6d4b75cb6d-zg4sc                       100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     38m
	  kube-system                 etcd-test-preload-250630                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38m
	  kube-system                 kindnet-rljhx                                  100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      38m
	  kube-system                 kube-apiserver-test-preload-250630             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-controller-manager-test-preload-250630    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-fkkqm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-test-preload-250630             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38m                kube-proxy       
	  Normal  NodeHasSufficientMemory  39m (x5 over 39m)  kubelet          Node test-preload-250630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     39m (x4 over 39m)  kubelet          Node test-preload-250630 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    39m (x5 over 39m)  kubelet          Node test-preload-250630 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasNoDiskPressure    38m                kubelet          Node test-preload-250630 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 38m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38m                kubelet          Node test-preload-250630 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     38m                kubelet          Node test-preload-250630 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38m                node-controller  Node test-preload-250630 event: Registered Node test-preload-250630 in Controller
	  Normal  NodeReady                38m                kubelet          Node test-preload-250630 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan27 10:45] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b6ebc758d6d0debbfc84cd5e989d558767ea6593ea00f3aed7e4645c36e5915f] <==
	{"level":"info","ts":"2025-01-27T11:55:10.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-01-27T11:55:10.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-01-27T11:55:10.179Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:test-preload-250630 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T11:55:10.179Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:55:10.181Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T11:55:10.181Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:55:10.181Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T11:55:10.199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T11:55:10.203Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:55:10.203Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:55:10.203Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:55:10.181Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:55:10.204Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-01-27T12:05:11.280Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":613}
	{"level":"info","ts":"2025-01-27T12:05:11.281Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":613,"took":"654.915µs"}
	{"level":"info","ts":"2025-01-27T12:10:11.285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":823}
	{"level":"info","ts":"2025-01-27T12:10:11.286Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":823,"took":"628.828µs"}
	{"level":"info","ts":"2025-01-27T12:15:11.290Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1034}
	{"level":"info","ts":"2025-01-27T12:15:11.291Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1034,"took":"524.411µs"}
	{"level":"info","ts":"2025-01-27T12:20:11.295Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1244}
	{"level":"info","ts":"2025-01-27T12:20:11.296Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1244,"took":"499.501µs"}
	{"level":"info","ts":"2025-01-27T12:25:11.301Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1455}
	{"level":"info","ts":"2025-01-27T12:25:11.301Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1455,"took":"671.053µs"}
	{"level":"info","ts":"2025-01-27T12:30:11.305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1665}
	{"level":"info","ts":"2025-01-27T12:30:11.305Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1665,"took":"515.436µs"}
	
	
	==> kernel <==
	 12:34:15 up  3:16,  0 users,  load average: 0.07, 0.18, 0.38
	Linux test-preload-250630 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4f1394d480e5574a2beb7db858c3c26ba60496ada62e90337b9e12a75d2f2b35] <==
	I0127 12:32:15.705503       1 main.go:301] handling current node
	I0127 12:32:25.713005       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:32:25.713128       1 main.go:301] handling current node
	I0127 12:32:35.704209       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:32:35.704242       1 main.go:301] handling current node
	I0127 12:32:45.705808       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:32:45.705917       1 main.go:301] handling current node
	I0127 12:32:55.713022       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:32:55.713054       1 main.go:301] handling current node
	I0127 12:33:05.703910       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:33:05.703944       1 main.go:301] handling current node
	I0127 12:33:15.704194       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:33:15.704227       1 main.go:301] handling current node
	I0127 12:33:25.713008       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:33:25.713043       1 main.go:301] handling current node
	I0127 12:33:35.704802       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:33:35.704926       1 main.go:301] handling current node
	I0127 12:33:45.705714       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:33:45.705750       1 main.go:301] handling current node
	I0127 12:33:55.713247       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:33:55.713283       1 main.go:301] handling current node
	I0127 12:34:05.703908       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:34:05.703943       1 main.go:301] handling current node
	I0127 12:34:15.703930       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:34:15.703970       1 main.go:301] handling current node
	
	
	==> kube-apiserver [59df883a4a0008f4a77b06433578d8c5463f2a5cbdd3ddc5da64256888aa9021] <==
	I0127 11:55:15.170774       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 11:55:15.171023       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 11:55:15.183471       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0127 11:55:15.206699       1 controller.go:611] quota admission added evaluator for: namespaces
	I0127 11:55:15.545944       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 11:55:15.947219       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0127 11:55:15.951026       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0127 11:55:15.951309       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 11:55:16.452650       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 11:55:16.499308       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 11:55:16.655533       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0127 11:55:16.670310       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 11:55:16.671571       1 controller.go:611] quota admission added evaluator for: endpoints
	I0127 11:55:16.675637       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 11:55:17.220576       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 11:55:17.914767       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 11:55:17.926547       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0127 11:55:17.939053       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 11:55:18.142935       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 11:55:31.035009       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0127 11:55:31.708272       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0127 11:55:32.417337       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	W0127 12:08:40.572807       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0127 12:18:29.857173       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0127 12:31:14.196758       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	
	
	==> kube-controller-manager [0787e7a1c51c2c4d534bbc42cc7965abdddfc6da675afb2695e48f4eae5d8be7] <==
	I0127 11:55:30.724501       1 shared_informer.go:262] Caches are synced for HPA
	I0127 11:55:30.725678       1 shared_informer.go:262] Caches are synced for disruption
	I0127 11:55:30.725694       1 disruption.go:371] Sending events to api server.
	I0127 11:55:30.779212       1 shared_informer.go:262] Caches are synced for taint
	I0127 11:55:30.779390       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0127 11:55:30.779476       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-250630. Assuming now as a timestamp.
	I0127 11:55:30.779247       1 shared_informer.go:262] Caches are synced for daemon sets
	I0127 11:55:30.779617       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0127 11:55:30.779639       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0127 11:55:30.780760       1 event.go:294] "Event occurred" object="test-preload-250630" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-250630 event: Registered Node test-preload-250630 in Controller"
	I0127 11:55:30.788612       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 11:55:30.788651       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 11:55:30.795975       1 event.go:294] "Event occurred" object="kube-system/etcd-test-preload-250630" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0127 11:55:30.797776       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-test-preload-250630" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0127 11:55:30.800395       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-test-preload-250630" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0127 11:55:30.802405       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-test-preload-250630" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0127 11:55:31.044922       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0127 11:55:31.276185       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 11:55:31.280248       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 11:55:31.280351       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0127 11:55:31.724719       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hgtpg"
	I0127 11:55:31.845634       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rljhx"
	I0127 11:55:31.845670       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zg4sc"
	I0127 11:55:31.884673       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fkkqm"
	I0127 11:55:50.782125       1 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	
	==> kube-proxy [14e1b64f410c74eb923ff2fdb23b5629640d251a6519456c2938bcd627bd2709] <==
	I0127 11:55:32.394546       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0127 11:55:32.394810       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0127 11:55:32.394886       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0127 11:55:32.412368       1 server_others.go:206] "Using iptables Proxier"
	I0127 11:55:32.412485       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0127 11:55:32.412519       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0127 11:55:32.412565       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0127 11:55:32.412628       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 11:55:32.412796       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 11:55:32.413038       1 server.go:661] "Version info" version="v1.24.4"
	I0127 11:55:32.413084       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:55:32.413740       1 config.go:317] "Starting service config controller"
	I0127 11:55:32.413810       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0127 11:55:32.413856       1 config.go:226] "Starting endpoint slice config controller"
	I0127 11:55:32.413898       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0127 11:55:32.414475       1 config.go:444] "Starting node config controller"
	I0127 11:55:32.414527       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0127 11:55:32.514656       1 shared_informer.go:262] Caches are synced for node config
	I0127 11:55:32.514697       1 shared_informer.go:262] Caches are synced for service config
	I0127 11:55:32.514769       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4d614a2eba620db6c30ed82df820ac0d1bce67d0e2f34786f2482a23d0fd6d45] <==
	W0127 11:55:15.185485       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 11:55:15.185560       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0127 11:55:15.185659       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 11:55:15.185915       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0127 11:55:15.185746       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 11:55:15.186011       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0127 11:55:15.185824       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:55:15.186083       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0127 11:55:15.186196       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 11:55:15.186246       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0127 11:55:15.189439       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 11:55:15.189540       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 11:55:15.189694       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 11:55:15.189738       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0127 11:55:16.041180       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 11:55:16.041220       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0127 11:55:16.071886       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 11:55:16.072043       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0127 11:55:16.189290       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:55:16.189326       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0127 11:55:16.229692       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:55:16.229726       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0127 11:55:16.231810       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 11:55:16.231933       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 11:55:16.667319       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.591481    2334 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.593945    2334 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.662542    2334 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrxz\" (UniqueName: \"kubernetes.io/projected/089aa46b-565c-4c28-ab5c-ee8612cbd71e-kube-api-access-lfrxz\") pod \"coredns-6d4b75cb6d-zg4sc\" (UID: \"089aa46b-565c-4c28-ab5c-ee8612cbd71e\") " pod="kube-system/coredns-6d4b75cb6d-zg4sc"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.662598    2334 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvtn\" (UniqueName: \"kubernetes.io/projected/3f3183bc-8ab4-4d96-af53-11e6d4a92b33-kube-api-access-4bvtn\") pod \"coredns-6d4b75cb6d-hgtpg\" (UID: \"3f3183bc-8ab4-4d96-af53-11e6d4a92b33\") " pod="kube-system/coredns-6d4b75cb6d-hgtpg"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.662627    2334 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3183bc-8ab4-4d96-af53-11e6d4a92b33-config-volume\") pod \"coredns-6d4b75cb6d-hgtpg\" (UID: \"3f3183bc-8ab4-4d96-af53-11e6d4a92b33\") " pod="kube-system/coredns-6d4b75cb6d-hgtpg"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.662655    2334 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3bf73def-3502-4824-b94e-3272ddc86c8e-tmp\") pod \"storage-provisioner\" (UID: \"3bf73def-3502-4824-b94e-3272ddc86c8e\") " pod="kube-system/storage-provisioner"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.662692    2334 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6fmr\" (UniqueName: \"kubernetes.io/projected/3bf73def-3502-4824-b94e-3272ddc86c8e-kube-api-access-f6fmr\") pod \"storage-provisioner\" (UID: \"3bf73def-3502-4824-b94e-3272ddc86c8e\") " pod="kube-system/storage-provisioner"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: I0127 11:55:48.662721    2334 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/089aa46b-565c-4c28-ab5c-ee8612cbd71e-config-volume\") pod \"coredns-6d4b75cb6d-zg4sc\" (UID: \"089aa46b-565c-4c28-ab5c-ee8612cbd71e\") " pod="kube-system/coredns-6d4b75cb6d-zg4sc"
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: W0127 11:55:48.941300    2334 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/crio-b10a82bfa6ff7019e6b1d1b2f5d24ac9bd1bc5e0b1dfcb898728c01d5f361c29 WatchSource:0}: Error finding container b10a82bfa6ff7019e6b1d1b2f5d24ac9bd1bc5e0b1dfcb898728c01d5f361c29: Status 404 returned error &{%!s(*http.body=&{0x4001009ef0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	Jan 27 11:55:48 test-preload-250630 kubelet[2334]: W0127 11:55:48.951681    2334 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/crio-16400edc20d490f11d0966f03262fb6f9b331b058cd43b80ef3e80930dd35067 WatchSource:0}: Error finding container 16400edc20d490f11d0966f03262fb6f9b331b058cd43b80ef3e80930dd35067: Status 404 returned error &{%!s(*http.body=&{0x4000b05068 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	Jan 27 11:55:49 test-preload-250630 kubelet[2334]: W0127 11:55:49.001032    2334 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/crio-0f7c590ca60c675e63e079df5e8cf885995f86f2ebadf941e842850aaec584e6 WatchSource:0}: Error finding container 0f7c590ca60c675e63e079df5e8cf885995f86f2ebadf941e842850aaec584e6: Status 404 returned error &{%!s(*http.body=&{0x40004b6540 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	Jan 27 12:00:18 test-preload-250630 kubelet[2334]: W0127 12:00:18.305085    2334 machine.go:65] Cannot read vendor id correctly, set empty.
	Jan 27 12:00:18 test-preload-250630 kubelet[2334]: E0127 12:00:18.310388    2334 container_manager_linux.go:510] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667, memory: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/system.slice/kubelet.service"
	Jan 27 12:05:18 test-preload-250630 kubelet[2334]: W0127 12:05:18.307669    2334 machine.go:65] Cannot read vendor id correctly, set empty.
	Jan 27 12:05:18 test-preload-250630 kubelet[2334]: E0127 12:05:18.312214    2334 container_manager_linux.go:510] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667, memory: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/system.slice/kubelet.service"
	Jan 27 12:10:18 test-preload-250630 kubelet[2334]: W0127 12:10:18.305320    2334 machine.go:65] Cannot read vendor id correctly, set empty.
	Jan 27 12:10:18 test-preload-250630 kubelet[2334]: E0127 12:10:18.313183    2334 container_manager_linux.go:510] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667, memory: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/system.slice/kubelet.service"
	Jan 27 12:15:18 test-preload-250630 kubelet[2334]: W0127 12:15:18.305487    2334 machine.go:65] Cannot read vendor id correctly, set empty.
	Jan 27 12:15:18 test-preload-250630 kubelet[2334]: E0127 12:15:18.314160    2334 container_manager_linux.go:510] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667, memory: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/system.slice/kubelet.service"
	Jan 27 12:20:18 test-preload-250630 kubelet[2334]: W0127 12:20:18.305659    2334 machine.go:65] Cannot read vendor id correctly, set empty.
	Jan 27 12:20:18 test-preload-250630 kubelet[2334]: E0127 12:20:18.315301    2334 container_manager_linux.go:510] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667, memory: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/system.slice/kubelet.service"
	Jan 27 12:25:18 test-preload-250630 kubelet[2334]: W0127 12:25:18.305995    2334 machine.go:65] Cannot read vendor id correctly, set empty.
	Jan 27 12:25:18 test-preload-250630 kubelet[2334]: E0127 12:25:18.316596    2334 container_manager_linux.go:510] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667, memory: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/system.slice/kubelet.service"
	Jan 27 12:30:18 test-preload-250630 kubelet[2334]: W0127 12:30:18.305609    2334 machine.go:65] Cannot read vendor id correctly, set empty.
	Jan 27 12:30:18 test-preload-250630 kubelet[2334]: E0127 12:30:18.317453    2334 container_manager_linux.go:510] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667, memory: /docker/c1511eb3d78419bda886ac0507f39e65f31453337bb684bd2446949958752667/system.slice/kubelet.service"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p test-preload-250630 -n test-preload-250630
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-250630 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-250630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-250630
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-250630: (2.376317237s)
--- FAIL: TestPreload (2404.70s)

                                                
                                    
TestScheduledStopUnix (37.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-346906 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-346906 --memory=2048 --driver=docker  --container-runtime=crio: (32.311986083s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-346906 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-346906 -n scheduled-stop-346906
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-346906 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 441506 running but should have been killed on reschedule of stop
panic.go:629: *** TestScheduledStopUnix FAILED at 2025-01-27 12:34:51.796074432 +0000 UTC m=+4630.743758457
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-346906
helpers_test.go:235: (dbg) docker inspect scheduled-stop-346906:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1f1212ec19255abc48440cb0fde8d008228ae72f8ca9ce2d701ec4d18fdd395",
	        "Created": "2025-01-27T12:34:24.446349898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T12:34:24.614597851Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/f1f1212ec19255abc48440cb0fde8d008228ae72f8ca9ce2d701ec4d18fdd395/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1f1212ec19255abc48440cb0fde8d008228ae72f8ca9ce2d701ec4d18fdd395/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1f1212ec19255abc48440cb0fde8d008228ae72f8ca9ce2d701ec4d18fdd395/hosts",
	        "LogPath": "/var/lib/docker/containers/f1f1212ec19255abc48440cb0fde8d008228ae72f8ca9ce2d701ec4d18fdd395/f1f1212ec19255abc48440cb0fde8d008228ae72f8ca9ce2d701ec4d18fdd395-json.log",
	        "Name": "/scheduled-stop-346906",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-346906:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-346906",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b4af028ebf483ac1222172e4102f2f5eeb558c262321f7f85030f13a941d28f3-init/diff:/var/lib/docker/overlay2/f9679fb4b68b50924b42b41bb8163a036f86217b5bdb257ff1bd6b1d4c169198/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b4af028ebf483ac1222172e4102f2f5eeb558c262321f7f85030f13a941d28f3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b4af028ebf483ac1222172e4102f2f5eeb558c262321f7f85030f13a941d28f3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b4af028ebf483ac1222172e4102f2f5eeb558c262321f7f85030f13a941d28f3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-346906",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-346906/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-346906",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-346906",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-346906",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "977517ebf4f30a00227682e10f81da9f6d51b4b412fcbcc96639d7c0a57f921f",
	            "SandboxKey": "/var/run/docker/netns/977517ebf4f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33328"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33329"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33332"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33330"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33331"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-346906": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f9d67615b41965ce8f73491806f0e95f619e4ce0de38b4b456a284036edfd61",
	                    "EndpointID": "cfdb21ec5771acdf3ad58cbfe685ad8d1e7a920ae7de76e558066ac76607e7d9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-346906",
	                        "f1f1212ec192"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-346906 -n scheduled-stop-346906
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-346906 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-346906 logs -n 25: (1.243285304s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                                Args                                |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-868030 ssh -n multinode-868030-m02 sudo cat              | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | /home/docker/cp-test_multinode-868030-m03_multinode-868030-m02.txt |                       |         |         |                     |                     |
	| node    | multinode-868030 node stop m03                                     | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	| node    | multinode-868030 node start                                        | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:50 UTC |
	|         | m03 -v=7 --alsologtostderr                                         |                       |         |         |                     |                     |
	| node    | list -p multinode-868030                                           | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC |                     |
	| stop    | -p multinode-868030                                                | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:50 UTC | 27 Jan 25 11:51 UTC |
	| start   | -p multinode-868030                                                | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:51 UTC | 27 Jan 25 11:52 UTC |
	|         | --wait=true -v=8                                                   |                       |         |         |                     |                     |
	|         | --alsologtostderr                                                  |                       |         |         |                     |                     |
	| node    | list -p multinode-868030                                           | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC |                     |
	| node    | multinode-868030 node delete                                       | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:52 UTC |
	|         | m03                                                                |                       |         |         |                     |                     |
	| stop    | multinode-868030 stop                                              | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:52 UTC |
	| start   | -p multinode-868030                                                | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:53 UTC |
	|         | --wait=true -v=8                                                   |                       |         |         |                     |                     |
	|         | --alsologtostderr                                                  |                       |         |         |                     |                     |
	|         | --driver=docker                                                    |                       |         |         |                     |                     |
	|         | --container-runtime=crio                                           |                       |         |         |                     |                     |
	| node    | list -p multinode-868030                                           | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	| start   | -p multinode-868030-m02                                            | multinode-868030-m02  | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	|         | --driver=docker                                                    |                       |         |         |                     |                     |
	|         | --container-runtime=crio                                           |                       |         |         |                     |                     |
	| start   | -p multinode-868030-m03                                            | multinode-868030-m03  | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:54 UTC |
	|         | --driver=docker                                                    |                       |         |         |                     |                     |
	|         | --container-runtime=crio                                           |                       |         |         |                     |                     |
	| node    | add -p multinode-868030                                            | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC |                     |
	| delete  | -p multinode-868030-m03                                            | multinode-868030-m03  | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC | 27 Jan 25 11:54 UTC |
	| delete  | -p multinode-868030                                                | multinode-868030      | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC | 27 Jan 25 11:54 UTC |
	| start   | -p test-preload-250630                                             | test-preload-250630   | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC |                     |
	|         | --memory=2200                                                      |                       |         |         |                     |                     |
	|         | --alsologtostderr                                                  |                       |         |         |                     |                     |
	|         | --wait=true --preload=false                                        |                       |         |         |                     |                     |
	|         | --driver=docker                                                    |                       |         |         |                     |                     |
	|         | --container-runtime=crio                                           |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                       |                       |         |         |                     |                     |
	| delete  | -p test-preload-250630                                             | test-preload-250630   | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	| start   | -p scheduled-stop-346906                                           | scheduled-stop-346906 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC | 27 Jan 25 12:34 UTC |
	|         | --memory=2048 --driver=docker                                      |                       |         |         |                     |                     |
	|         | --container-runtime=crio                                           |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-346906                                           | scheduled-stop-346906 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC |                     |
	|         | --schedule 5m                                                      |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-346906                                           | scheduled-stop-346906 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC |                     |
	|         | --schedule 5m                                                      |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-346906                                           | scheduled-stop-346906 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC |                     |
	|         | --schedule 5m                                                      |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-346906                                           | scheduled-stop-346906 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC |                     |
	|         | --schedule 15s                                                     |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-346906                                           | scheduled-stop-346906 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC |                     |
	|         | --schedule 15s                                                     |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-346906                                           | scheduled-stop-346906 | jenkins | v1.35.0 | 27 Jan 25 12:34 UTC |                     |
	|         | --schedule 15s                                                     |                       |         |         |                     |                     |
	|---------|--------------------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:34:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:34:19.000845  439087 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:34:19.000994  439087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:34:19.000998  439087 out.go:358] Setting ErrFile to fd 2...
	I0127 12:34:19.001002  439087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:34:19.001301  439087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 12:34:19.001766  439087 out.go:352] Setting JSON to false
	I0127 12:34:19.002736  439087 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11806,"bootTime":1737969453,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 12:34:19.002831  439087 start.go:139] virtualization:  
	I0127 12:34:19.006909  439087 out.go:177] * [scheduled-stop-346906] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 12:34:19.011014  439087 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 12:34:19.011125  439087 notify.go:220] Checking for updates...
	I0127 12:34:19.017041  439087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:34:19.020399  439087 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 12:34:19.023390  439087 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 12:34:19.026254  439087 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 12:34:19.029232  439087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:34:19.032527  439087 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:34:19.057654  439087 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:34:19.057770  439087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:34:19.122157  439087 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 12:34:19.113019394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:34:19.122255  439087 docker.go:318] overlay module found
	I0127 12:34:19.125427  439087 out.go:177] * Using the docker driver based on user configuration
	I0127 12:34:19.128223  439087 start.go:297] selected driver: docker
	I0127 12:34:19.128232  439087 start.go:901] validating driver "docker" against <nil>
	I0127 12:34:19.128244  439087 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:34:19.128996  439087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:34:19.181374  439087 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 12:34:19.172587605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:34:19.181573  439087 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:34:19.181801  439087 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:34:19.184825  439087 out.go:177] * Using Docker driver with root privileges
	I0127 12:34:19.187734  439087 cni.go:84] Creating CNI manager for ""
	I0127 12:34:19.187788  439087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 12:34:19.187796  439087 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 12:34:19.187892  439087 start.go:340] cluster config:
	{Name:scheduled-stop-346906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-346906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:34:19.191080  439087 out.go:177] * Starting "scheduled-stop-346906" primary control-plane node in "scheduled-stop-346906" cluster
	I0127 12:34:19.194086  439087 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 12:34:19.196890  439087 out.go:177] * Pulling base image v0.0.46 ...
	I0127 12:34:19.199692  439087 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:34:19.199733  439087 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
	I0127 12:34:19.199740  439087 cache.go:56] Caching tarball of preloaded images
	I0127 12:34:19.199791  439087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 12:34:19.199838  439087 preload.go:172] Found /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0127 12:34:19.199854  439087 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:34:19.200217  439087 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/config.json ...
	I0127 12:34:19.200236  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/config.json: {Name:mka432937ce130def2fba88a537c67c782b80332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:19.218975  439087 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 12:34:19.218985  439087 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 12:34:19.219005  439087 cache.go:227] Successfully downloaded all kic artifacts
	I0127 12:34:19.219037  439087 start.go:360] acquireMachinesLock for scheduled-stop-346906: {Name:mk42191bc99dc381987c223f2f9a65fd80af8f91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:34:19.219186  439087 start.go:364] duration metric: took 135.163µs to acquireMachinesLock for "scheduled-stop-346906"
	I0127 12:34:19.219212  439087 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-346906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-346906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:34:19.219286  439087 start.go:125] createHost starting for "" (driver="docker")
	I0127 12:34:19.222493  439087 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0127 12:34:19.222745  439087 start.go:159] libmachine.API.Create for "scheduled-stop-346906" (driver="docker")
	I0127 12:34:19.222775  439087 client.go:168] LocalClient.Create starting
	I0127 12:34:19.222861  439087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem
	I0127 12:34:19.222893  439087 main.go:141] libmachine: Decoding PEM data...
	I0127 12:34:19.222913  439087 main.go:141] libmachine: Parsing certificate...
	I0127 12:34:19.222969  439087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem
	I0127 12:34:19.222987  439087 main.go:141] libmachine: Decoding PEM data...
	I0127 12:34:19.222996  439087 main.go:141] libmachine: Parsing certificate...
	I0127 12:34:19.223403  439087 cli_runner.go:164] Run: docker network inspect scheduled-stop-346906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 12:34:19.239624  439087 cli_runner.go:211] docker network inspect scheduled-stop-346906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 12:34:19.239705  439087 network_create.go:284] running [docker network inspect scheduled-stop-346906] to gather additional debugging logs...
	I0127 12:34:19.239719  439087 cli_runner.go:164] Run: docker network inspect scheduled-stop-346906
	W0127 12:34:19.254645  439087 cli_runner.go:211] docker network inspect scheduled-stop-346906 returned with exit code 1
	I0127 12:34:19.254669  439087 network_create.go:287] error running [docker network inspect scheduled-stop-346906]: docker network inspect scheduled-stop-346906: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-346906 not found
	I0127 12:34:19.254679  439087 network_create.go:289] output of [docker network inspect scheduled-stop-346906]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-346906 not found
	
	** /stderr **
	I0127 12:34:19.254782  439087 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:34:19.271633  439087 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83a41a4be89e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bb:86:ff:d6} reservation:<nil>}
	I0127 12:34:19.271990  439087 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b8647f61e26c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:4f:9a:96:61} reservation:<nil>}
	I0127 12:34:19.272256  439087 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8a54f92038ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:21:b4:54:50} reservation:<nil>}
	I0127 12:34:19.272651  439087 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ca1c0}
	I0127 12:34:19.272668  439087 network_create.go:124] attempt to create docker network scheduled-stop-346906 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 12:34:19.272722  439087 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-346906 scheduled-stop-346906
	I0127 12:34:19.347741  439087 network_create.go:108] docker network scheduled-stop-346906 192.168.76.0/24 created
	I0127 12:34:19.347765  439087 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-346906" container
	I0127 12:34:19.347836  439087 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 12:34:19.363999  439087 cli_runner.go:164] Run: docker volume create scheduled-stop-346906 --label name.minikube.sigs.k8s.io=scheduled-stop-346906 --label created_by.minikube.sigs.k8s.io=true
	I0127 12:34:19.385235  439087 oci.go:103] Successfully created a docker volume scheduled-stop-346906
	I0127 12:34:19.385314  439087 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-346906-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-346906 --entrypoint /usr/bin/test -v scheduled-stop-346906:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 12:34:19.980531  439087 oci.go:107] Successfully prepared a docker volume scheduled-stop-346906
	I0127 12:34:19.980580  439087 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:34:19.980598  439087 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 12:34:19.980676  439087 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-346906:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 12:34:24.377605  439087 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-346906:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.396883537s)
	I0127 12:34:24.377626  439087 kic.go:203] duration metric: took 4.397025239s to extract preloaded images to volume ...
	W0127 12:34:24.377771  439087 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 12:34:24.377878  439087 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 12:34:24.431326  439087 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-346906 --name scheduled-stop-346906 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-346906 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-346906 --network scheduled-stop-346906 --ip 192.168.76.2 --volume scheduled-stop-346906:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 12:34:24.796831  439087 cli_runner.go:164] Run: docker container inspect scheduled-stop-346906 --format={{.State.Running}}
	I0127 12:34:24.820681  439087 cli_runner.go:164] Run: docker container inspect scheduled-stop-346906 --format={{.State.Status}}
	I0127 12:34:24.848360  439087 cli_runner.go:164] Run: docker exec scheduled-stop-346906 stat /var/lib/dpkg/alternatives/iptables
	I0127 12:34:24.900684  439087 oci.go:144] the created container "scheduled-stop-346906" has a running status.
	I0127 12:34:24.900703  439087 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa...
	I0127 12:34:25.086357  439087 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 12:34:25.107592  439087 cli_runner.go:164] Run: docker container inspect scheduled-stop-346906 --format={{.State.Status}}
	I0127 12:34:25.131713  439087 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 12:34:25.131894  439087 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-346906 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 12:34:25.191317  439087 cli_runner.go:164] Run: docker container inspect scheduled-stop-346906 --format={{.State.Status}}
	I0127 12:34:25.213667  439087 machine.go:93] provisionDockerMachine start ...
	I0127 12:34:25.213751  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:25.238637  439087 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:25.238891  439087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33328 <nil> <nil>}
	I0127 12:34:25.238899  439087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:34:25.242090  439087 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0127 12:34:28.370843  439087 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-346906
	
	I0127 12:34:28.370859  439087 ubuntu.go:169] provisioning hostname "scheduled-stop-346906"
	I0127 12:34:28.370933  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:28.388817  439087 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:28.389054  439087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33328 <nil> <nil>}
	I0127 12:34:28.389063  439087 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-346906 && echo "scheduled-stop-346906" | sudo tee /etc/hostname
	I0127 12:34:28.523206  439087 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-346906
	
	I0127 12:34:28.523278  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:28.541656  439087 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:28.541897  439087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33328 <nil> <nil>}
	I0127 12:34:28.541912  439087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-346906' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-346906/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-346906' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:34:28.663236  439087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:34:28.663254  439087 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20319-300538/.minikube CaCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20319-300538/.minikube}
	I0127 12:34:28.663280  439087 ubuntu.go:177] setting up certificates
	I0127 12:34:28.663288  439087 provision.go:84] configureAuth start
	I0127 12:34:28.663347  439087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-346906
	I0127 12:34:28.680883  439087 provision.go:143] copyHostCerts
	I0127 12:34:28.680941  439087 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem, removing ...
	I0127 12:34:28.680949  439087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem
	I0127 12:34:28.681059  439087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/ca.pem (1082 bytes)
	I0127 12:34:28.681161  439087 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem, removing ...
	I0127 12:34:28.681166  439087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem
	I0127 12:34:28.681191  439087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/cert.pem (1123 bytes)
	I0127 12:34:28.681241  439087 exec_runner.go:144] found /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem, removing ...
	I0127 12:34:28.681245  439087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem
	I0127 12:34:28.681272  439087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20319-300538/.minikube/key.pem (1679 bytes)
	I0127 12:34:28.681317  439087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-346906 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-346906]
	I0127 12:34:29.220285  439087 provision.go:177] copyRemoteCerts
	I0127 12:34:29.220342  439087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:34:29.220381  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:29.237308  439087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33328 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa Username:docker}
	I0127 12:34:29.327654  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0127 12:34:29.351010  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:34:29.374660  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:34:29.398315  439087 provision.go:87] duration metric: took 735.014301ms to configureAuth
	I0127 12:34:29.398333  439087 ubuntu.go:193] setting minikube options for container-runtime
	I0127 12:34:29.398523  439087 config.go:182] Loaded profile config "scheduled-stop-346906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:34:29.398625  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:29.415548  439087 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:29.415797  439087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33328 <nil> <nil>}
	I0127 12:34:29.415810  439087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:34:29.643701  439087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:34:29.643716  439087 machine.go:96] duration metric: took 4.430038478s to provisionDockerMachine
	I0127 12:34:29.643725  439087 client.go:171] duration metric: took 10.420945805s to LocalClient.Create
	I0127 12:34:29.643737  439087 start.go:167] duration metric: took 10.42099301s to libmachine.API.Create "scheduled-stop-346906"
	I0127 12:34:29.643743  439087 start.go:293] postStartSetup for "scheduled-stop-346906" (driver="docker")
	I0127 12:34:29.643753  439087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:34:29.643824  439087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:34:29.643880  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:29.661760  439087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33328 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa Username:docker}
	I0127 12:34:29.752288  439087 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:34:29.755460  439087 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 12:34:29.755486  439087 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 12:34:29.755496  439087 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 12:34:29.755502  439087 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 12:34:29.755511  439087 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/addons for local assets ...
	I0127 12:34:29.755569  439087 filesync.go:126] Scanning /home/jenkins/minikube-integration/20319-300538/.minikube/files for local assets ...
	I0127 12:34:29.755650  439087 filesync.go:149] local asset: /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem -> 3059362.pem in /etc/ssl/certs
	I0127 12:34:29.755750  439087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:34:29.764262  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem --> /etc/ssl/certs/3059362.pem (1708 bytes)
	I0127 12:34:29.789097  439087 start.go:296] duration metric: took 145.339537ms for postStartSetup
	I0127 12:34:29.789469  439087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-346906
	I0127 12:34:29.806157  439087 profile.go:143] Saving config to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/config.json ...
	I0127 12:34:29.806442  439087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:34:29.806501  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:29.823295  439087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33328 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa Username:docker}
	I0127 12:34:29.907884  439087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 12:34:29.912128  439087 start.go:128] duration metric: took 10.692827227s to createHost
	I0127 12:34:29.912141  439087 start.go:83] releasing machines lock for "scheduled-stop-346906", held for 10.692948309s
	I0127 12:34:29.912211  439087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-346906
	I0127 12:34:29.928596  439087 ssh_runner.go:195] Run: cat /version.json
	I0127 12:34:29.928645  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:29.928665  439087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:34:29.928733  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:29.953116  439087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33328 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa Username:docker}
	I0127 12:34:29.956736  439087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33328 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa Username:docker}
	I0127 12:34:30.050710  439087 ssh_runner.go:195] Run: systemctl --version
	I0127 12:34:30.190643  439087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:34:30.331715  439087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:34:30.336035  439087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:34:30.359231  439087 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0127 12:34:30.359320  439087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:34:30.396989  439087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 12:34:30.397002  439087 start.go:495] detecting cgroup driver to use...
	I0127 12:34:30.397045  439087 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 12:34:30.397095  439087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:34:30.413893  439087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:34:30.425249  439087 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:34:30.425306  439087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:34:30.439801  439087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:34:30.454650  439087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:34:30.545006  439087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:34:30.641631  439087 docker.go:233] disabling docker service ...
	I0127 12:34:30.641690  439087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:34:30.663270  439087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:34:30.675963  439087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:34:30.758798  439087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:34:30.849266  439087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:34:30.861236  439087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:34:30.877578  439087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:34:30.877637  439087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:34:30.887702  439087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:34:30.887766  439087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:34:30.897877  439087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:34:30.908341  439087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:34:30.918997  439087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:34:30.928342  439087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:34:30.938367  439087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:34:30.956912  439087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:34:30.968806  439087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:34:30.977322  439087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:34:30.985756  439087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:34:31.069400  439087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 12:34:31.191116  439087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:34:31.191179  439087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:34:31.194690  439087 start.go:563] Will wait 60s for crictl version
	I0127 12:34:31.194755  439087 ssh_runner.go:195] Run: which crictl
	I0127 12:34:31.198572  439087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:34:31.243889  439087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0127 12:34:31.243972  439087 ssh_runner.go:195] Run: crio --version
	I0127 12:34:31.286268  439087 ssh_runner.go:195] Run: crio --version
	I0127 12:34:31.328989  439087 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0127 12:34:31.331819  439087 cli_runner.go:164] Run: docker network inspect scheduled-stop-346906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:34:31.348473  439087 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 12:34:31.352040  439087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:34:31.362944  439087 kubeadm.go:883] updating cluster {Name:scheduled-stop-346906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-346906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:34:31.363048  439087 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:34:31.363132  439087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:34:31.445381  439087 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:34:31.445394  439087 crio.go:433] Images already preloaded, skipping extraction
	I0127 12:34:31.445449  439087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:34:31.480502  439087 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:34:31.480514  439087 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:34:31.480520  439087 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.1 crio true true} ...
	I0127 12:34:31.480603  439087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-346906 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-346906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:34:31.480675  439087 ssh_runner.go:195] Run: crio config
	I0127 12:34:31.535153  439087 cni.go:84] Creating CNI manager for ""
	I0127 12:34:31.535165  439087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 12:34:31.535173  439087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:34:31.535196  439087 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-346906 NodeName:scheduled-stop-346906 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:34:31.535322  439087 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-346906"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
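The generated config above pins three address ranges: pod CIDR 10.244.0.0/16, service CIDR 10.96.0.0/12, and node IP 192.168.76.2. These must be mutually disjoint for cluster routing to work; a minimal stdlib sketch of that sanity check (illustrative only, not part of minikube or the test run):

```python
import ipaddress

# Subnets taken from the kubeadm config above
pod_subnet = ipaddress.ip_network("10.244.0.0/16")
service_subnet = ipaddress.ip_network("10.96.0.0/12")
node_ip = ipaddress.ip_address("192.168.76.2")

# The two cluster CIDRs must be disjoint
assert not pod_subnet.overlaps(service_subnet)
# The node IP must sit outside both cluster CIDRs
assert node_ip not in pod_subnet and node_ip not in service_subnet
print("pod/service CIDRs disjoint, node IP outside both")
```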
	I0127 12:34:31.535389  439087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:34:31.544129  439087 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:34:31.544192  439087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:34:31.552549  439087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0127 12:34:31.570228  439087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:34:31.588057  439087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0127 12:34:31.605125  439087 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 12:34:31.608230  439087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
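The /etc/hosts rewrite in the bash one-liner above is idempotent: it drops any stale control-plane.minikube.internal entry before appending the current node IP. A minimal Python sketch of the same logic (illustrative only; `ensure_hosts_entry` is a hypothetical name, not minikube's implementation):

```python
def ensure_hosts_entry(hosts_text: str, ip: str, hostname: str) -> str:
    """Drop any existing line mapping `hostname`, then append a fresh
    `ip<TAB>hostname` entry, mirroring the grep-v/echo pipeline above."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + hostname)]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n"
after = ensure_hosts_entry(before, "192.168.76.2",
                           "control-plane.minikube.internal")
print(after)
```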
	I0127 12:34:31.618592  439087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:34:31.711462  439087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:34:31.725301  439087 certs.go:68] Setting up /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906 for IP: 192.168.76.2
	I0127 12:34:31.725312  439087 certs.go:194] generating shared ca certs ...
	I0127 12:34:31.725328  439087 certs.go:226] acquiring lock for ca certs: {Name:mk949cfe0d73736f3d2e354b486773524a8fcbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:31.725465  439087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key
	I0127 12:34:31.725507  439087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key
	I0127 12:34:31.725513  439087 certs.go:256] generating profile certs ...
	I0127 12:34:31.725564  439087 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/client.key
	I0127 12:34:31.725584  439087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/client.crt with IP's: []
	I0127 12:34:32.512861  439087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/client.crt ...
	I0127 12:34:32.512877  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/client.crt: {Name:mkf1863b0c379643e3c68500ad5d573f1062ec7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:32.513085  439087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/client.key ...
	I0127 12:34:32.513095  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/client.key: {Name:mkb80664225df775a1d24bfdab8bea794d8aa148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:32.513192  439087 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.key.92e7b4cc
	I0127 12:34:32.513208  439087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.crt.92e7b4cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0127 12:34:33.427941  439087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.crt.92e7b4cc ...
	I0127 12:34:33.427960  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.crt.92e7b4cc: {Name:mkd84353aa12d37ea268a6fbe9bde548c7983211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:33.428185  439087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.key.92e7b4cc ...
	I0127 12:34:33.428194  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.key.92e7b4cc: {Name:mkeb3fe8fb68d81f92605978516f0aa33f31a137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:33.428288  439087 certs.go:381] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.crt.92e7b4cc -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.crt
	I0127 12:34:33.428364  439087 certs.go:385] copying /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.key.92e7b4cc -> /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.key
	I0127 12:34:33.428417  439087 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.key
	I0127 12:34:33.428429  439087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.crt with IP's: []
	I0127 12:34:33.567533  439087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.crt ...
	I0127 12:34:33.567548  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.crt: {Name:mk363bb791978b055570a3bbaf88fe3130463181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:33.567777  439087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.key ...
	I0127 12:34:33.567785  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.key: {Name:mk7f52e1a029175a2599458932ca8d351a63b252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:33.567981  439087 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936.pem (1338 bytes)
	W0127 12:34:33.568017  439087 certs.go:480] ignoring /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936_empty.pem, impossibly tiny 0 bytes
	I0127 12:34:33.568024  439087 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:34:33.568046  439087 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:34:33.568069  439087 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:34:33.568091  439087 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/certs/key.pem (1679 bytes)
	I0127 12:34:33.568132  439087 certs.go:484] found cert: /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem (1708 bytes)
	I0127 12:34:33.568733  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:34:33.593486  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:34:33.618470  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:34:33.642836  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 12:34:33.667324  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 12:34:33.691576  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:34:33.718154  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:34:33.742683  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/scheduled-stop-346906/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:34:33.769098  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/certs/305936.pem --> /usr/share/ca-certificates/305936.pem (1338 bytes)
	I0127 12:34:33.794302  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/ssl/certs/3059362.pem --> /usr/share/ca-certificates/3059362.pem (1708 bytes)
	I0127 12:34:33.818369  439087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20319-300538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:34:33.843004  439087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:34:33.861681  439087 ssh_runner.go:195] Run: openssl version
	I0127 12:34:33.867446  439087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/305936.pem && ln -fs /usr/share/ca-certificates/305936.pem /etc/ssl/certs/305936.pem"
	I0127 12:34:33.877336  439087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/305936.pem
	I0127 12:34:33.880984  439087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:26 /usr/share/ca-certificates/305936.pem
	I0127 12:34:33.881046  439087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/305936.pem
	I0127 12:34:33.888181  439087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/305936.pem /etc/ssl/certs/51391683.0"
	I0127 12:34:33.897696  439087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3059362.pem && ln -fs /usr/share/ca-certificates/3059362.pem /etc/ssl/certs/3059362.pem"
	I0127 12:34:33.907375  439087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3059362.pem
	I0127 12:34:33.911098  439087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:26 /usr/share/ca-certificates/3059362.pem
	I0127 12:34:33.911166  439087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3059362.pem
	I0127 12:34:33.918230  439087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3059362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:34:33.927869  439087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:34:33.937333  439087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:34:33.940949  439087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:18 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:34:33.941005  439087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:34:33.948337  439087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
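The symlink steps above follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints a subject-name hash (e.g. b5213941 for minikubeCA.pem), and `<hash>.0` must link to the PEM file for the trust store to resolve it. A hedged shell sketch of the same convention against a throwaway self-signed cert (assumes the openssl CLI is available; not minikube's code):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Create a throwaway self-signed certificate (illustrative only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.pem -subj "/CN=example-test-ca" 2>/dev/null

# Compute the subject-name hash OpenSSL uses for trust-store lookups
hash=$(openssl x509 -hash -noout -in ca.pem)

# Install the cert under its hashed name, as the log lines above do
ln -fs "$workdir/ca.pem" "$hash.0"
echo "linked $hash.0 -> ca.pem"
```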
	I0127 12:34:33.957984  439087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:34:33.961862  439087 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:34:33.961918  439087 kubeadm.go:392] StartCluster: {Name:scheduled-stop-346906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-346906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:34:33.961987  439087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:34:33.962059  439087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:34:34.014492  439087 cri.go:89] found id: ""
	I0127 12:34:34.014567  439087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:34:34.024587  439087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:34:34.033883  439087 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 12:34:34.033938  439087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:34:34.043355  439087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:34:34.043365  439087 kubeadm.go:157] found existing configuration files:
	
	I0127 12:34:34.043419  439087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:34:34.052620  439087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:34:34.052676  439087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:34:34.061706  439087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:34:34.070564  439087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:34:34.070622  439087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:34:34.079306  439087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:34:34.088525  439087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:34:34.088580  439087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:34:34.097507  439087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:34:34.106996  439087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:34:34.107057  439087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:34:34.115993  439087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 12:34:34.174357  439087 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:34:34.174831  439087 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:34:34.200070  439087 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 12:34:34.200135  439087 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 12:34:34.200170  439087 kubeadm.go:310] OS: Linux
	I0127 12:34:34.200215  439087 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 12:34:34.200262  439087 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 12:34:34.200308  439087 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 12:34:34.200355  439087 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 12:34:34.200402  439087 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 12:34:34.200449  439087 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 12:34:34.200493  439087 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 12:34:34.200540  439087 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 12:34:34.200585  439087 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 12:34:34.263014  439087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:34:34.263195  439087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:34:34.263286  439087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:34:34.271548  439087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:34:34.274910  439087 out.go:235]   - Generating certificates and keys ...
	I0127 12:34:34.275100  439087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:34:34.275180  439087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:34:34.668161  439087 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:34:35.134174  439087 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:34:35.730532  439087 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:34:36.267502  439087 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:34:37.563385  439087 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:34:37.563649  439087 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-346906] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 12:34:37.956907  439087 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:34:37.957178  439087 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-346906] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 12:34:38.391753  439087 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:34:38.897164  439087 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:34:39.254519  439087 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:34:39.254725  439087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:34:39.900461  439087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:34:40.114455  439087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:34:40.384742  439087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:34:40.783642  439087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:34:40.943377  439087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:34:40.944488  439087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:34:40.956710  439087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:34:40.962097  439087 out.go:235]   - Booting up control plane ...
	I0127 12:34:40.962205  439087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:34:40.962280  439087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:34:40.962357  439087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:34:40.972888  439087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:34:40.980134  439087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:34:40.980188  439087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:34:41.070739  439087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:34:41.070854  439087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:34:42.076095  439087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005014249s
	I0127 12:34:42.076177  439087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:34:48.077397  439087 kubeadm.go:310] [api-check] The API server is healthy after 6.001686152s
	I0127 12:34:48.098664  439087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:34:48.113299  439087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:34:48.142370  439087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:34:48.142574  439087 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-346906 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:34:48.152956  439087 kubeadm.go:310] [bootstrap-token] Using token: rp5254.1rxay9ho36bxb3jl
	I0127 12:34:48.155874  439087 out.go:235]   - Configuring RBAC rules ...
	I0127 12:34:48.156003  439087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:34:48.161123  439087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:34:48.175175  439087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:34:48.181921  439087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:34:48.186440  439087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:34:48.190483  439087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:34:48.484403  439087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:34:48.927449  439087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:34:49.484669  439087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:34:49.485871  439087 kubeadm.go:310] 
	I0127 12:34:49.485938  439087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:34:49.485945  439087 kubeadm.go:310] 
	I0127 12:34:49.486021  439087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:34:49.486025  439087 kubeadm.go:310] 
	I0127 12:34:49.486055  439087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:34:49.486113  439087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:34:49.486163  439087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:34:49.486166  439087 kubeadm.go:310] 
	I0127 12:34:49.486224  439087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:34:49.486227  439087 kubeadm.go:310] 
	I0127 12:34:49.486274  439087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:34:49.486278  439087 kubeadm.go:310] 
	I0127 12:34:49.486329  439087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:34:49.486403  439087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:34:49.486470  439087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:34:49.486473  439087 kubeadm.go:310] 
	I0127 12:34:49.486556  439087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:34:49.486633  439087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:34:49.486636  439087 kubeadm.go:310] 
	I0127 12:34:49.486719  439087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rp5254.1rxay9ho36bxb3jl \
	I0127 12:34:49.486822  439087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 \
	I0127 12:34:49.486841  439087 kubeadm.go:310] 	--control-plane 
	I0127 12:34:49.486844  439087 kubeadm.go:310] 
	I0127 12:34:49.486928  439087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:34:49.486932  439087 kubeadm.go:310] 
	I0127 12:34:49.487013  439087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rp5254.1rxay9ho36bxb3jl \
	I0127 12:34:49.487135  439087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:48e67245c146e975d09f55afb746c1ac0255b920d803cd458fd330de66b03567 
	I0127 12:34:49.490096  439087 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 12:34:49.490344  439087 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 12:34:49.490461  439087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
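The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded public key. A minimal sketch of recomputing it with the standard openssl pipeline — using a throwaway self-signed certificate under `/tmp` as a stand-in, since this test environment's real CA lives at `/etc/kubernetes/pki/ca.crt` on the node:

```shell
# Generate a throwaway CA cert (hypothetical stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# sha256 over the DER-encoded public key, the same value kubeadm prints
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

Running this against the node's actual CA would reproduce the `sha256:48e672…` value shown in the log.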
	I0127 12:34:49.490676  439087 cni.go:84] Creating CNI manager for ""
	I0127 12:34:49.490685  439087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 12:34:49.493733  439087 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 12:34:49.496623  439087 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 12:34:49.501378  439087 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 12:34:49.501389  439087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 12:34:49.520675  439087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 12:34:49.817412  439087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:34:49.817511  439087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:34:49.817549  439087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-346906 minikube.k8s.io/updated_at=2025_01_27T12_34_49_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa minikube.k8s.io/name=scheduled-stop-346906 minikube.k8s.io/primary=true
	I0127 12:34:49.994445  439087 ops.go:34] apiserver oom_adj: -16
	I0127 12:34:49.994465  439087 kubeadm.go:1113] duration metric: took 177.023757ms to wait for elevateKubeSystemPrivileges
	I0127 12:34:49.994476  439087 kubeadm.go:394] duration metric: took 16.032561919s to StartCluster
	I0127 12:34:49.994491  439087 settings.go:142] acquiring lock: {Name:mk59e26dfc61a439e501d9ae8e7cbc4a6f05e310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:49.994554  439087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 12:34:49.995261  439087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20319-300538/kubeconfig: {Name:mka2258aa0d8dec49c19d97bc831e58d42b19053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:34:49.995485  439087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:34:49.995593  439087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:34:49.995806  439087 config.go:182] Loaded profile config "scheduled-stop-346906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:34:49.995835  439087 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:34:49.995892  439087 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-346906"
	I0127 12:34:49.995908  439087 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-346906"
	I0127 12:34:49.995930  439087 host.go:66] Checking if "scheduled-stop-346906" exists ...
	I0127 12:34:49.996402  439087 cli_runner.go:164] Run: docker container inspect scheduled-stop-346906 --format={{.State.Status}}
	I0127 12:34:49.996642  439087 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-346906"
	I0127 12:34:49.996658  439087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-346906"
	I0127 12:34:49.996913  439087 cli_runner.go:164] Run: docker container inspect scheduled-stop-346906 --format={{.State.Status}}
	I0127 12:34:50.006399  439087 out.go:177] * Verifying Kubernetes components...
	I0127 12:34:50.012287  439087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:34:50.055865  439087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:34:50.059227  439087 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:34:50.059239  439087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:34:50.059311  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:50.068564  439087 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-346906"
	I0127 12:34:50.068593  439087 host.go:66] Checking if "scheduled-stop-346906" exists ...
	I0127 12:34:50.069013  439087 cli_runner.go:164] Run: docker container inspect scheduled-stop-346906 --format={{.State.Status}}
	I0127 12:34:50.103635  439087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33328 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa Username:docker}
	I0127 12:34:50.115296  439087 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:34:50.115313  439087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:34:50.115382  439087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-346906
	I0127 12:34:50.145913  439087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33328 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/scheduled-stop-346906/id_rsa Username:docker}
	I0127 12:34:50.245820  439087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 12:34:50.303480  439087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:34:50.349183  439087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:34:50.356570  439087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:34:50.676556  439087 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0127 12:34:50.678463  439087 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:34:50.678511  439087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:34:50.909438  439087 api_server.go:72] duration metric: took 913.90606ms to wait for apiserver process to appear ...
	I0127 12:34:50.909454  439087 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:34:50.909474  439087 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 12:34:50.915025  439087 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 12:34:50.917877  439087 addons.go:514] duration metric: took 922.018529ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0127 12:34:50.922732  439087 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 12:34:50.923937  439087 api_server.go:141] control plane version: v1.32.1
	I0127 12:34:50.923952  439087 api_server.go:131] duration metric: took 14.493211ms to wait for apiserver health ...
	I0127 12:34:50.923959  439087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:34:50.930589  439087 system_pods.go:59] 5 kube-system pods found
	I0127 12:34:50.930610  439087 system_pods.go:61] "etcd-scheduled-stop-346906" [cfd473b1-710b-4761-859a-94a1665c3bc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:34:50.930619  439087 system_pods.go:61] "kube-apiserver-scheduled-stop-346906" [a5c658fc-83f4-4556-8e66-1c87f595d412] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:34:50.930625  439087 system_pods.go:61] "kube-controller-manager-scheduled-stop-346906" [835bf9ad-f2be-4f73-aef2-3c461bcc1237] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:34:50.930631  439087 system_pods.go:61] "kube-scheduler-scheduled-stop-346906" [c641fc9a-e1ab-4ca4-a964-9c52ace7a2a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:34:50.930635  439087 system_pods.go:61] "storage-provisioner" [2503edfd-12ec-49d5-9e11-1f91ada16775] Pending
	I0127 12:34:50.930641  439087 system_pods.go:74] duration metric: took 6.676339ms to wait for pod list to return data ...
	I0127 12:34:50.930651  439087 kubeadm.go:582] duration metric: took 935.143212ms to wait for: map[apiserver:true system_pods:true]
	I0127 12:34:50.930662  439087 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:34:50.939220  439087 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0127 12:34:50.939239  439087 node_conditions.go:123] node cpu capacity is 2
	I0127 12:34:50.939249  439087 node_conditions.go:105] duration metric: took 8.584123ms to run NodePressure ...
	I0127 12:34:50.939261  439087 start.go:241] waiting for startup goroutines ...
	I0127 12:34:51.180148  439087 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-346906" context rescaled to 1 replicas
	I0127 12:34:51.180173  439087 start.go:246] waiting for cluster config update ...
	I0127 12:34:51.180184  439087 start.go:255] writing updated cluster config ...
	I0127 12:34:51.180510  439087 ssh_runner.go:195] Run: rm -f paused
	I0127 12:34:51.241189  439087 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:34:51.244514  439087 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-346906" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.619985960Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.620919396Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.32.1" id=cf7d384b-f00d-4926-8aa0-8404cb4f8b91 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.620975970Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.32.1" id=5cfa0f72-5d64-4412-aa91-305628343153 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.627910516Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c,RepoTags:[registry.k8s.io/kube-scheduler:v1.32.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1 registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e],Size_:68973892,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=cf7d384b-f00d-4926-8aa0-8404cb4f8b91 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.628872440Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19,RepoTags:[registry.k8s.io/kube-apiserver:v1.32.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244 registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac],Size_:94991840,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=5cfa0f72-5d64-4412-aa91-305628343153 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.629054593Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.32.1" id=1f2027a9-fc11-46f4-b45c-9e99289642df name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.629212976Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c,RepoTags:[registry.k8s.io/kube-scheduler:v1.32.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1 registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e],Size_:68973892,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=1f2027a9-fc11-46f4-b45c-9e99289642df name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.631235245Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.32.1" id=c0352af2-816e-4ca9-909f-87c15071fd46 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.631436689Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19,RepoTags:[registry.k8s.io/kube-apiserver:v1.32.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244 registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac],Size_:94991840,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=c0352af2-816e-4ca9-909f-87c15071fd46 name=/runtime.v1.ImageService/ImageStatus
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.631761274Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-346906/kube-scheduler" id=86ad5a6e-b9c9-49fe-b904-ca4a3a8dfcd1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.631967386Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.632064928Z" level=info msg="Creating container: kube-system/kube-apiserver-scheduled-stop-346906/kube-apiserver" id=7c943f5c-b14f-482a-ac58-e12e926d98e4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.632142376Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.792033095Z" level=info msg="Created container af1f8e81d4915e1c7e4430e290d92955781e74d3b63e2bc70dfb0af14643a001: kube-system/kube-apiserver-scheduled-stop-346906/kube-apiserver" id=7c943f5c-b14f-482a-ac58-e12e926d98e4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.792821506Z" level=info msg="Starting container: af1f8e81d4915e1c7e4430e290d92955781e74d3b63e2bc70dfb0af14643a001" id=1da0e2cb-d873-4cb2-81e6-11d42b3db804 name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.795351392Z" level=info msg="Created container adcfaa749ba0219feddb1508b510c4c24377620e41fe0d787da59910b9e28bd4: kube-system/kube-scheduler-scheduled-stop-346906/kube-scheduler" id=86ad5a6e-b9c9-49fe-b904-ca4a3a8dfcd1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.795890352Z" level=info msg="Starting container: adcfaa749ba0219feddb1508b510c4c24377620e41fe0d787da59910b9e28bd4" id=cf2254b3-9027-4536-8f4c-91ff92fe0ec7 name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.802014275Z" level=info msg="Created container 4bcb1c1af42bba9e6fc4677354f824e7395201d8f4fe83937a14045da747c670: kube-system/kube-controller-manager-scheduled-stop-346906/kube-controller-manager" id=ad0745ae-4a2b-4eac-a5ad-86181d79426f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.802654535Z" level=info msg="Starting container: 4bcb1c1af42bba9e6fc4677354f824e7395201d8f4fe83937a14045da747c670" id=8ac8a743-eb8a-4481-8dca-15fa2206da88 name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.812867640Z" level=info msg="Started container" PID=1408 containerID=af1f8e81d4915e1c7e4430e290d92955781e74d3b63e2bc70dfb0af14643a001 description=kube-system/kube-apiserver-scheduled-stop-346906/kube-apiserver id=1da0e2cb-d873-4cb2-81e6-11d42b3db804 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f6120930ca9ebb55ecdefb04c7c270178ea40aa63f6e45c2e2cb7d743aa5e08
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.814687686Z" level=info msg="Created container cd13c563a5a70b8840e18ddf5f2791c967123e9c172ff6309d2cc962854e761c: kube-system/etcd-scheduled-stop-346906/etcd" id=157d2eb0-b4c3-4891-a5a6-c9dd98c55f9c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.815202777Z" level=info msg="Starting container: cd13c563a5a70b8840e18ddf5f2791c967123e9c172ff6309d2cc962854e761c" id=0c9ded60-fe04-4b81-923c-66daf247218f name=/runtime.v1.RuntimeService/StartContainer
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.824945730Z" level=info msg="Started container" PID=1399 containerID=adcfaa749ba0219feddb1508b510c4c24377620e41fe0d787da59910b9e28bd4 description=kube-system/kube-scheduler-scheduled-stop-346906/kube-scheduler id=cf2254b3-9027-4536-8f4c-91ff92fe0ec7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2287cfbfe6410899ecfc51ca80afb8054dd1a00d4c31f968f171cb05475f6c8b
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.826793576Z" level=info msg="Started container" PID=1373 containerID=4bcb1c1af42bba9e6fc4677354f824e7395201d8f4fe83937a14045da747c670 description=kube-system/kube-controller-manager-scheduled-stop-346906/kube-controller-manager id=8ac8a743-eb8a-4481-8dca-15fa2206da88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=29d4dda335fff32e9a820b5f4695cef89c59e4378c2b8b9162b2353391dd7d6b
	Jan 27 12:34:42 scheduled-stop-346906 crio[980]: time="2025-01-27 12:34:42.840005889Z" level=info msg="Started container" PID=1410 containerID=cd13c563a5a70b8840e18ddf5f2791c967123e9c172ff6309d2cc962854e761c description=kube-system/etcd-scheduled-stop-346906/etcd id=0c9ded60-fe04-4b81-923c-66daf247218f name=/runtime.v1.RuntimeService/StartContainer sandboxID=396288b2ab839635dc9601bfc2ba4fe2828fba480f28a849da7318244e40d840
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	af1f8e81d4915       265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19   10 seconds ago      Running             kube-apiserver            0                   4f6120930ca9e       kube-apiserver-scheduled-stop-346906
	cd13c563a5a70       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82   10 seconds ago      Running             etcd                      0                   396288b2ab839       etcd-scheduled-stop-346906
	adcfaa749ba02       ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c   10 seconds ago      Running             kube-scheduler            0                   2287cfbfe6410       kube-scheduler-scheduled-stop-346906
	4bcb1c1af42bb       2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13   10 seconds ago      Running             kube-controller-manager   0                   29d4dda335fff       kube-controller-manager-scheduled-stop-346906
	
	
	==> describe nodes <==
	Name:               scheduled-stop-346906
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-346906
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35c230aa12d4986001aef5f6e29069f3bc5493aa
	                    minikube.k8s.io/name=scheduled-stop-346906
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_34_49_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:34:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-346906
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:34:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:34:49 +0000   Mon, 27 Jan 2025 12:34:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:34:49 +0000   Mon, 27 Jan 2025 12:34:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:34:49 +0000   Mon, 27 Jan 2025 12:34:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Jan 2025 12:34:49 +0000   Mon, 27 Jan 2025 12:34:43 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-346906
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 408cef4e89a741398461aa5cd9904655
	  System UUID:                c0d4066c-194c-47a3-bd99-b38ba949afdc
	  Boot ID:                    dd59411c-5b67-4eb9-9e59-86d920ad153c
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-346906                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5s
	  kube-system                 kube-apiserver-scheduled-stop-346906             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-scheduled-stop-346906    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-scheduled-stop-346906             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
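The percentages in the tables above are pod resource requests measured against node allocatable capacity (2 CPUs = 2000m, 8022304Ki memory). A quick arithmetic check, summing the request figures from the per-pod table (100m etcd + 250m apiserver + 200m controller-manager + 100m scheduler, plus etcd's 100Mi memory request):

```shell
cpu_alloc_m=2000                      # node allocatable: 2 CPUs in millicores
mem_alloc_ki=8022304                  # node allocatable memory in Ki
cpu_req_m=$((100 + 250 + 200 + 100))  # etcd + apiserver + controller-manager + scheduler
mem_req_ki=$((100 * 1024))            # etcd's 100Mi request in Ki
echo "cpu ${cpu_req_m}m ($((cpu_req_m * 100 / cpu_alloc_m))%)"
echo "memory ${mem_req_ki}Ki ($((mem_req_ki * 100 / mem_alloc_ki))%)"
```

This reproduces the "650m (32%)" CPU and "100Mi (1%)" memory totals in the Allocated resources section (integer percentages, as kubectl displays them).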
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 5s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 5s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4s    kubelet          Node scheduled-stop-346906 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s    kubelet          Node scheduled-stop-346906 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s    kubelet          Node scheduled-stop-346906 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           1s    node-controller  Node scheduled-stop-346906 event: Registered Node scheduled-stop-346906 in Controller
	
	
	==> dmesg <==
	[Jan27 10:45] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [cd13c563a5a70b8840e18ddf5f2791c967123e9c172ff6309d2cc962854e761c] <==
	{"level":"info","ts":"2025-01-27T12:34:42.960896Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-01-27T12:34:42.961236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-01-27T12:34:42.961385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-01-27T12:34:42.962242Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T12:34:42.962325Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T12:34:43.819119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-01-27T12:34:43.819244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-01-27T12:34:43.819311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-01-27T12:34:43.819352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-01-27T12:34:43.819385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-01-27T12:34:43.819436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-01-27T12:34:43.819470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-01-27T12:34:43.827147Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:34:43.831225Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:scheduled-stop-346906 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T12:34:43.831318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:34:43.831601Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:34:43.832253Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:34:43.833014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-01-27T12:34:43.835144Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:34:43.835255Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:34:43.835308Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:34:43.835435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:34:43.835475Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T12:34:43.835578Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:34:43.836385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:34:53 up  3:17,  0 users,  load average: 0.60, 0.30, 0.41
	Linux scheduled-stop-346906 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [af1f8e81d4915e1c7e4430e290d92955781e74d3b63e2bc70dfb0af14643a001] <==
	I0127 12:34:46.395978       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:34:46.395995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:34:46.396376       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:34:46.396935       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:34:46.396962       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:34:46.396969       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:34:46.396974       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:34:46.396979       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:34:46.400659       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:34:46.400690       1 policy_source.go:240] refreshing policies
	I0127 12:34:46.407308       1 controller.go:615] quota admission added evaluator for: namespaces
	I0127 12:34:46.424554       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:34:47.155875       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0127 12:34:47.161290       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0127 12:34:47.161312       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:34:47.858982       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:34:47.912185       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:34:47.993632       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0127 12:34:48.006471       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 12:34:48.007978       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:34:48.018108       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:34:48.326910       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:34:48.903885       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:34:48.925294       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 12:34:48.936430       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [4bcb1c1af42bba9e6fc4677354f824e7395201d8f4fe83937a14045da747c670] <==
	I0127 12:34:52.946808       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-346906"
	I0127 12:34:52.946834       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-346906"
	I0127 12:34:52.947886       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:34:52.899924       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:34:52.948490       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:34:52.948537       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:34:52.948662       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:34:52.948737       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:34:52.948780       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:34:52.949045       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:34:52.949088       1 shared_informer.go:320] Caches are synced for job
	I0127 12:34:52.899934       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:34:52.926746       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:34:52.975143       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:34:52.975265       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:34:52.975342       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="scheduled-stop-346906"
	I0127 12:34:52.975399       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0127 12:34:52.987643       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:34:52.987742       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:34:53.009920       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:34:53.022163       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:34:53.022292       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:34:53.022325       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:34:53.025578       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:34:53.091489       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-346906"
	
	
	==> kube-scheduler [adcfaa749ba0219feddb1508b510c4c24377620e41fe0d787da59910b9e28bd4] <==
	W0127 12:34:46.403127       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:34:46.403145       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 12:34:47.210665       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 12:34:47.210779       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.213135       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:34:47.213261       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.275436       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:34:47.275549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.279234       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:34:47.279358       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.344180       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:34:47.344228       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 12:34:47.358819       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 12:34:47.358957       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.367795       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 12:34:47.367939       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.368541       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:34:47.368664       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.399698       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:34:47.399831       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.466527       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:34:47.466639       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:34:47.477594       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 12:34:47.477643       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:34:49.348983       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.095922    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d76ae746a7fa92c76b95ca4cf4502a76-etcd-certs\") pod \"etcd-scheduled-stop-346906\" (UID: \"d76ae746a7fa92c76b95ca4cf4502a76\") " pod="kube-system/etcd-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.095944    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff846186385f17e0f52fab4471639d4f-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-346906\" (UID: \"ff846186385f17e0f52fab4471639d4f\") " pod="kube-system/kube-apiserver-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.095964    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e20f19078348f3e129d833525bae848-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-346906\" (UID: \"3e20f19078348f3e129d833525bae848\") " pod="kube-system/kube-controller-manager-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.095984    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e20f19078348f3e129d833525bae848-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-346906\" (UID: \"3e20f19078348f3e129d833525bae848\") " pod="kube-system/kube-controller-manager-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096004    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e20f19078348f3e129d833525bae848-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-346906\" (UID: \"3e20f19078348f3e129d833525bae848\") " pod="kube-system/kube-controller-manager-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096029    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff846186385f17e0f52fab4471639d4f-ca-certs\") pod \"kube-apiserver-scheduled-stop-346906\" (UID: \"ff846186385f17e0f52fab4471639d4f\") " pod="kube-system/kube-apiserver-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096051    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff846186385f17e0f52fab4471639d4f-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-346906\" (UID: \"ff846186385f17e0f52fab4471639d4f\") " pod="kube-system/kube-apiserver-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096069    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff846186385f17e0f52fab4471639d4f-k8s-certs\") pod \"kube-apiserver-scheduled-stop-346906\" (UID: \"ff846186385f17e0f52fab4471639d4f\") " pod="kube-system/kube-apiserver-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096095    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff846186385f17e0f52fab4471639d4f-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-346906\" (UID: \"ff846186385f17e0f52fab4471639d4f\") " pod="kube-system/kube-apiserver-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096115    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e20f19078348f3e129d833525bae848-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-346906\" (UID: \"3e20f19078348f3e129d833525bae848\") " pod="kube-system/kube-controller-manager-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096138    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d76ae746a7fa92c76b95ca4cf4502a76-etcd-data\") pod \"etcd-scheduled-stop-346906\" (UID: \"d76ae746a7fa92c76b95ca4cf4502a76\") " pod="kube-system/etcd-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096157    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e20f19078348f3e129d833525bae848-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-346906\" (UID: \"3e20f19078348f3e129d833525bae848\") " pod="kube-system/kube-controller-manager-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.096596    1534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e20f19078348f3e129d833525bae848-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-346906\" (UID: \"3e20f19078348f3e129d833525bae848\") " pod="kube-system/kube-controller-manager-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.772472    1534 apiserver.go:52] "Watching apiserver"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.794671    1534 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.870243    1534 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.870646    1534 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: E0127 12:34:49.929922    1534 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-346906\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: E0127 12:34:49.931463    1534 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-346906\" already exists" pod="kube-system/etcd-scheduled-stop-346906"
	Jan 27 12:34:49 scheduled-stop-346906 kubelet[1534]: I0127 12:34:49.979573    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-346906" podStartSLOduration=1.979553788 podStartE2EDuration="1.979553788s" podCreationTimestamp="2025-01-27 12:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:34:49.958383958 +0000 UTC m=+1.273565745" watchObservedRunningTime="2025-01-27 12:34:49.979553788 +0000 UTC m=+1.294735558"
	Jan 27 12:34:50 scheduled-stop-346906 kubelet[1534]: I0127 12:34:50.011818    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-346906" podStartSLOduration=2.01179564 podStartE2EDuration="2.01179564s" podCreationTimestamp="2025-01-27 12:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:34:49.980504677 +0000 UTC m=+1.295686455" watchObservedRunningTime="2025-01-27 12:34:50.01179564 +0000 UTC m=+1.326977410"
	Jan 27 12:34:50 scheduled-stop-346906 kubelet[1534]: I0127 12:34:50.103744    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-346906" podStartSLOduration=2.103721725 podStartE2EDuration="2.103721725s" podCreationTimestamp="2025-01-27 12:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:34:50.012196664 +0000 UTC m=+1.327378442" watchObservedRunningTime="2025-01-27 12:34:50.103721725 +0000 UTC m=+1.418903511"
	Jan 27 12:34:50 scheduled-stop-346906 kubelet[1534]: I0127 12:34:50.214849    1534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-346906" podStartSLOduration=2.214828133 podStartE2EDuration="2.214828133s" podCreationTimestamp="2025-01-27 12:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:34:50.110623023 +0000 UTC m=+1.425804793" watchObservedRunningTime="2025-01-27 12:34:50.214828133 +0000 UTC m=+1.530009903"
	Jan 27 12:34:52 scheduled-stop-346906 kubelet[1534]: I0127 12:34:52.931296    1534 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 12:34:52 scheduled-stop-346906 kubelet[1534]: I0127 12:34:52.932408    1534 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-346906 -n scheduled-stop-346906
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-346906 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-bszcr kindnet-z2p7s kube-proxy-tjdwr storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-346906 describe pod coredns-668d6bf9bc-bszcr kindnet-z2p7s kube-proxy-tjdwr storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-346906 describe pod coredns-668d6bf9bc-bszcr kindnet-z2p7s kube-proxy-tjdwr storage-provisioner: exit status 1 (134.55901ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-bszcr" not found
	Error from server (NotFound): pods "kindnet-z2p7s" not found
	Error from server (NotFound): pods "kube-proxy-tjdwr" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-346906 describe pod coredns-668d6bf9bc-bszcr kindnet-z2p7s kube-proxy-tjdwr storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-346906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-346906
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-346906: (2.017806373s)
--- FAIL: TestScheduledStopUnix (37.27s)


Test pass (296/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 4.93
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.16
18 TestDownloadOnly/v1.32.1/DeleteAll 0.4
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 182.61
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 10.93
35 TestAddons/parallel/Registry 17.41
37 TestAddons/parallel/InspektorGadget 11.83
38 TestAddons/parallel/MetricsServer 6.87
40 TestAddons/parallel/CSI 56.87
41 TestAddons/parallel/Headlamp 17.84
42 TestAddons/parallel/CloudSpanner 5.63
43 TestAddons/parallel/LocalPath 51.78
44 TestAddons/parallel/NvidiaDevicePlugin 6.7
45 TestAddons/parallel/Yakd 11.98
47 TestAddons/StoppedEnableDisable 12.17
48 TestCertOptions 38.92
49 TestCertExpiration 240.5
51 TestForceSystemdFlag 41.34
52 TestForceSystemdEnv 44.7
58 TestErrorSpam/setup 28.76
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.16
61 TestErrorSpam/pause 1.84
62 TestErrorSpam/unpause 1.84
63 TestErrorSpam/stop 1.56
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 48.22
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.99
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.47
75 TestFunctional/serial/CacheCmd/cache/add_local 1.48
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.16
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.22
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.21
83 TestFunctional/serial/ExtraConfig 34.6
84 TestFunctional/serial/ComponentHealth 0.14
85 TestFunctional/serial/LogsCmd 1.78
86 TestFunctional/serial/LogsFileCmd 1.82
87 TestFunctional/serial/InvalidService 4.63
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 17.94
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.18
97 TestFunctional/parallel/ServiceCmdConnect 11.7
98 TestFunctional/parallel/AddonsCmd 0.24
99 TestFunctional/parallel/PersistentVolumeClaim 28.03
101 TestFunctional/parallel/SSHCmd 0.82
102 TestFunctional/parallel/CpCmd 1.91
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.64
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.21
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.1
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.46
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
129 TestFunctional/parallel/ServiceCmd/List 0.67
130 TestFunctional/parallel/MountCmd/any-port 9.84
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
133 TestFunctional/parallel/ServiceCmd/Format 0.86
134 TestFunctional/parallel/ServiceCmd/URL 0.56
135 TestFunctional/parallel/MountCmd/specific-port 2
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
137 TestFunctional/parallel/Version/short 0.1
138 TestFunctional/parallel/Version/components 1.37
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.73
144 TestFunctional/parallel/ImageCommands/Setup 0.75
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.7
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.46
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.61
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 179.54
162 TestMultiControlPlane/serial/DeployApp 56.94
163 TestMultiControlPlane/serial/PingHostFromPods 1.71
164 TestMultiControlPlane/serial/AddWorkerNode 33.28
165 TestMultiControlPlane/serial/NodeLabels 0.1
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
167 TestMultiControlPlane/serial/CopyFile 19.06
168 TestMultiControlPlane/serial/StopSecondaryNode 12.77
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
170 TestMultiControlPlane/serial/RestartSecondaryNode 26.22
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.38
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 193.72
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.63
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
175 TestMultiControlPlane/serial/StopCluster 35.83
176 TestMultiControlPlane/serial/RestartCluster 103.18
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
178 TestMultiControlPlane/serial/AddSecondaryNode 73.36
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
183 TestJSONOutput/start/Command 51.48
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.73
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.87
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
208 TestKicCustomNetwork/create_custom_network 40.92
209 TestKicCustomNetwork/use_default_bridge_network 37.27
210 TestKicExistingNetwork 35.8
211 TestKicCustomSubnet 36.06
212 TestKicStaticIP 33.75
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 65.91
217 TestMountStart/serial/StartWithMountFirst 9.19
218 TestMountStart/serial/VerifyMountFirst 0.27
219 TestMountStart/serial/StartWithMountSecond 7.83
220 TestMountStart/serial/VerifyMountSecond 0.28
221 TestMountStart/serial/DeleteFirst 1.63
222 TestMountStart/serial/VerifyMountPostDelete 0.28
223 TestMountStart/serial/Stop 1.21
224 TestMountStart/serial/RestartStopped 7.88
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 108.97
229 TestMultiNode/serial/DeployApp2Nodes 6.86
230 TestMultiNode/serial/PingHostFrom2Pods 1
231 TestMultiNode/serial/AddNode 30.12
232 TestMultiNode/serial/MultiNodeLabels 0.1
233 TestMultiNode/serial/ProfileList 0.68
234 TestMultiNode/serial/CopyFile 10.15
235 TestMultiNode/serial/StopNode 2.28
236 TestMultiNode/serial/StartAfterStop 10.17
237 TestMultiNode/serial/RestartKeepsNodes 80.65
238 TestMultiNode/serial/DeleteNode 5.34
239 TestMultiNode/serial/StopMultiNode 23.85
240 TestMultiNode/serial/RestartMultiNode 55.77
241 TestMultiNode/serial/ValidateNameConflict 34.92
251 TestInsufficientStorage 14.12
252 TestRunningBinaryUpgrade 84.91
254 TestKubernetesUpgrade 382.57
255 TestMissingContainerUpgrade 167.89
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 39.73
259 TestNoKubernetes/serial/StartWithStopK8s 7.06
260 TestNoKubernetes/serial/Start 9.34
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
262 TestNoKubernetes/serial/ProfileList 1.21
263 TestNoKubernetes/serial/Stop 1.24
264 TestNoKubernetes/serial/StartNoArgs 7.76
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
266 TestStoppedBinaryUpgrade/Setup 0.62
267 TestStoppedBinaryUpgrade/Upgrade 83.14
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
277 TestPause/serial/Start 49.57
278 TestPause/serial/SecondStartNoReconfiguration 58.24
279 TestPause/serial/Pause 0.96
280 TestPause/serial/VerifyStatus 0.44
281 TestPause/serial/Unpause 1.02
282 TestPause/serial/PauseAgain 1.22
283 TestPause/serial/DeletePaused 3.76
284 TestPause/serial/VerifyDeletedResources 0.23
292 TestNetworkPlugins/group/false 4.98
297 TestStartStop/group/old-k8s-version/serial/FirstStart 128.74
298 TestStartStop/group/old-k8s-version/serial/DeployApp 11.56
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.16
300 TestStartStop/group/old-k8s-version/serial/Stop 11.95
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
302 TestStartStop/group/old-k8s-version/serial/SecondStart 140.58
304 TestStartStop/group/no-preload/serial/FirstStart 68.29
305 TestStartStop/group/no-preload/serial/DeployApp 9.39
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
307 TestStartStop/group/no-preload/serial/Stop 12.08
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 277.01
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
313 TestStartStop/group/old-k8s-version/serial/Pause 3.08
315 TestStartStop/group/embed-certs/serial/FirstStart 80
316 TestStartStop/group/embed-certs/serial/DeployApp 9.35
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
318 TestStartStop/group/embed-certs/serial/Stop 11.96
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
320 TestStartStop/group/embed-certs/serial/SecondStart 266.76
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
324 TestStartStop/group/no-preload/serial/Pause 3.26
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.93
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.39
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 12
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.26
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
335 TestStartStop/group/embed-certs/serial/Pause 3.64
337 TestStartStop/group/newest-cni/serial/FirstStart 36.61
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
340 TestStartStop/group/newest-cni/serial/Stop 1.23
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
342 TestStartStop/group/newest-cni/serial/SecondStart 15.59
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
346 TestStartStop/group/newest-cni/serial/Pause 3.51
347 TestNetworkPlugins/group/auto/Start 77.29
348 TestNetworkPlugins/group/auto/KubeletFlags 0.29
349 TestNetworkPlugins/group/auto/NetCatPod 11.29
350 TestNetworkPlugins/group/auto/DNS 0.19
351 TestNetworkPlugins/group/auto/Localhost 0.16
352 TestNetworkPlugins/group/auto/HairPin 0.17
353 TestNetworkPlugins/group/kindnet/Start 77.94
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.37
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
360 TestNetworkPlugins/group/kindnet/NetCatPod 12.44
361 TestNetworkPlugins/group/calico/Start 75.44
362 TestNetworkPlugins/group/kindnet/DNS 0.19
363 TestNetworkPlugins/group/kindnet/Localhost 0.18
364 TestNetworkPlugins/group/kindnet/HairPin 0.17
365 TestNetworkPlugins/group/custom-flannel/Start 58.58
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.39
368 TestNetworkPlugins/group/calico/NetCatPod 11.33
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
371 TestNetworkPlugins/group/calico/DNS 0.44
372 TestNetworkPlugins/group/calico/Localhost 0.16
373 TestNetworkPlugins/group/calico/HairPin 0.15
374 TestNetworkPlugins/group/custom-flannel/DNS 0.25
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.26
377 TestNetworkPlugins/group/enable-default-cni/Start 83.04
378 TestNetworkPlugins/group/flannel/Start 61.2
379 TestNetworkPlugins/group/flannel/ControllerPod 6.02
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
381 TestNetworkPlugins/group/flannel/NetCatPod 11.33
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
384 TestNetworkPlugins/group/flannel/DNS 0.19
385 TestNetworkPlugins/group/flannel/Localhost 0.16
386 TestNetworkPlugins/group/flannel/HairPin 0.15
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
390 TestNetworkPlugins/group/bridge/Start 77.46
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
392 TestNetworkPlugins/group/bridge/NetCatPod 11.28
393 TestNetworkPlugins/group/bridge/DNS 0.19
394 TestNetworkPlugins/group/bridge/Localhost 0.16
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (6.08s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-497316 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-497316 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.075374644s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.08s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 11:17:47.170278  305936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 11:17:47.170365  305936 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-497316
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-497316: exit status 85 (96.677336ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-497316 | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC |          |
	|         | -p download-only-497316        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:17:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:17:41.144135  305941 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:17:41.144334  305941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:17:41.144367  305941 out.go:358] Setting ErrFile to fd 2...
	I0127 11:17:41.144387  305941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:17:41.144654  305941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	W0127 11:17:41.144814  305941 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20319-300538/.minikube/config/config.json: open /home/jenkins/minikube-integration/20319-300538/.minikube/config/config.json: no such file or directory
	I0127 11:17:41.145289  305941 out.go:352] Setting JSON to true
	I0127 11:17:41.146242  305941 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7209,"bootTime":1737969453,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 11:17:41.146344  305941 start.go:139] virtualization:  
	I0127 11:17:41.150603  305941 out.go:97] [download-only-497316] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0127 11:17:41.150819  305941 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 11:17:41.150927  305941 notify.go:220] Checking for updates...
	I0127 11:17:41.154618  305941 out.go:169] MINIKUBE_LOCATION=20319
	I0127 11:17:41.157948  305941 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:17:41.160841  305941 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:17:41.163910  305941 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 11:17:41.166832  305941 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 11:17:41.172462  305941 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:17:41.172779  305941 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:17:41.200442  305941 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:17:41.200560  305941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:17:41.259729  305941 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:17:41.250034525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:17:41.259847  305941 docker.go:318] overlay module found
	I0127 11:17:41.262845  305941 out.go:97] Using the docker driver based on user configuration
	I0127 11:17:41.262872  305941 start.go:297] selected driver: docker
	I0127 11:17:41.262885  305941 start.go:901] validating driver "docker" against <nil>
	I0127 11:17:41.263004  305941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:17:41.320880  305941 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:17:41.303127373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:17:41.321095  305941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:17:41.321394  305941 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 11:17:41.321554  305941 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:17:41.324681  305941 out.go:169] Using Docker driver with root privileges
	I0127 11:17:41.327457  305941 cni.go:84] Creating CNI manager for ""
	I0127 11:17:41.327543  305941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0127 11:17:41.327561  305941 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:17:41.327653  305941 start.go:340] cluster config:
	{Name:download-only-497316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-497316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:17:41.330686  305941 out.go:97] Starting "download-only-497316" primary control-plane node in "download-only-497316" cluster
	I0127 11:17:41.330706  305941 cache.go:121] Beginning downloading kic base image for docker with crio
	I0127 11:17:41.333553  305941 out.go:97] Pulling base image v0.0.46 ...
	I0127 11:17:41.333583  305941 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:17:41.333751  305941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 11:17:41.349050  305941 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 11:17:41.349886  305941 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 11:17:41.350007  305941 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 11:17:41.398791  305941 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0127 11:17:41.398817  305941 cache.go:56] Caching tarball of preloaded images
	I0127 11:17:41.399636  305941 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 11:17:41.402922  305941 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 11:17:41.402944  305941 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0127 11:17:41.495419  305941 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0127 11:17:45.309818  305941 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0127 11:17:45.309945  305941 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-497316 host does not exist
	  To start a cluster, run: "minikube start -p download-only-497316"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-497316
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.1/json-events (4.93s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-227627 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-227627 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.932707811s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.93s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 11:17:52.583282  305936 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 11:17:52.583322  305936 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20319-300538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.16s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-227627
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-227627: exit status 85 (156.045881ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-497316 | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC |                     |
	|         | -p download-only-497316        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC | 27 Jan 25 11:17 UTC |
	| delete  | -p download-only-497316        | download-only-497316 | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC | 27 Jan 25 11:17 UTC |
	| start   | -o=json --download-only        | download-only-227627 | jenkins | v1.35.0 | 27 Jan 25 11:17 UTC |                     |
	|         | -p download-only-227627        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:17:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:17:47.696554  306142 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:17:47.696766  306142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:17:47.696793  306142 out.go:358] Setting ErrFile to fd 2...
	I0127 11:17:47.696813  306142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:17:47.697164  306142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:17:47.698156  306142 out.go:352] Setting JSON to true
	I0127 11:17:47.699050  306142 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7215,"bootTime":1737969453,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 11:17:47.699215  306142 start.go:139] virtualization:  
	I0127 11:17:47.702642  306142 out.go:97] [download-only-227627] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:17:47.702929  306142 notify.go:220] Checking for updates...
	I0127 11:17:47.705753  306142 out.go:169] MINIKUBE_LOCATION=20319
	I0127 11:17:47.708745  306142 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:17:47.711658  306142 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:17:47.714531  306142 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 11:17:47.717388  306142 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 11:17:47.723087  306142 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:17:47.723369  306142 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:17:47.754122  306142 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:17:47.754226  306142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:17:47.810220  306142 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 11:17:47.801301167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:17:47.810328  306142 docker.go:318] overlay module found
	I0127 11:17:47.813317  306142 out.go:97] Using the docker driver based on user configuration
	I0127 11:17:47.813346  306142 start.go:297] selected driver: docker
	I0127 11:17:47.813353  306142 start.go:901] validating driver "docker" against <nil>
	I0127 11:17:47.813474  306142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:17:47.865651  306142 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 11:17:47.856876386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:17:47.865871  306142 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:17:47.866146  306142 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 11:17:47.866318  306142 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:17:47.869359  306142 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-227627 host does not exist
	  To start a cluster, run: "minikube start -p download-only-227627"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.16s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.40s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-227627
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 11:17:54.519323  305936 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-166762 --alsologtostderr --binary-mirror http://127.0.0.1:39903 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-166762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-166762
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-334107
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-334107: exit status 85 (83.533271ms)

                                                
                                                
-- stdout --
	* Profile "addons-334107" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-334107"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-334107
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-334107: exit status 85 (82.630743ms)

                                                
                                                
-- stdout --
	* Profile "addons-334107" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-334107"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (182.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-334107 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-334107 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m2.614335754s)
--- PASS: TestAddons/Setup (182.61s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-334107 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-334107 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.93s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-334107 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-334107 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5647ff5c-5ba9-47e4-b2fb-6521a284e7b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5647ff5c-5ba9-47e4-b2fb-6521a284e7b0] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003715411s
addons_test.go:633: (dbg) Run:  kubectl --context addons-334107 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-334107 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-334107 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-334107 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.93s)

                                                
                                    
TestAddons/parallel/Registry (17.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 18.425016ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-w9657" [bca942c5-f096-4159-8437-b4ad70f2524a] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004104471s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gcxqf" [63d96504-e7b4-42c0-b091-b2fcb073e611] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003546182s
addons_test.go:331: (dbg) Run:  kubectl --context addons-334107 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-334107 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-334107 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.369469107s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 ip
2025/01/27 11:21:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.41s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xgbqk" [39ff38b7-aea2-4048-a8da-15a26614a768] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.016911467s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 addons disable inspektor-gadget --alsologtostderr -v=1: (5.815767448s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.87s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 8.784982ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-6x2xs" [4eef1a60-9197-4899-8aff-d30f6c7b06ec] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003469325s
addons_test.go:402: (dbg) Run:  kubectl --context addons-334107 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.87s)

                                                
                                    
TestAddons/parallel/CSI (56.87s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 11:21:59.642996  305936 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 11:21:59.649462  305936 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 11:21:59.649510  305936 kapi.go:107] duration metric: took 9.931563ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.949812ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-334107 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-334107 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b36dc5d6-9336-4c24-af9f-0a8cd59e0aa2] Pending
helpers_test.go:344: "task-pv-pod" [b36dc5d6-9336-4c24-af9f-0a8cd59e0aa2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b36dc5d6-9336-4c24-af9f-0a8cd59e0aa2] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003604392s
addons_test.go:511: (dbg) Run:  kubectl --context addons-334107 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-334107 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-334107 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-334107 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-334107 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-334107 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-334107 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [da7aee75-89ae-4e2d-893c-b9814df2a20b] Pending
helpers_test.go:344: "task-pv-pod-restore" [da7aee75-89ae-4e2d-893c-b9814df2a20b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [da7aee75-89ae-4e2d-893c-b9814df2a20b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010996303s
addons_test.go:553: (dbg) Run:  kubectl --context addons-334107 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-334107 delete pod task-pv-pod-restore: (1.17616657s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-334107 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-334107 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 addons disable volumesnapshots --alsologtostderr -v=1: (1.261358423s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.921774718s)
--- PASS: TestAddons/parallel/CSI (56.87s)

                                                
                                    
TestAddons/parallel/Headlamp (17.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-334107 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-f59kc" [f9f1181e-1a61-4ec9-9f2a-4f2e8ddee0a7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-f59kc" [f9f1181e-1a61-4ec9-9f2a-4f2e8ddee0a7] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003638576s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 addons disable headlamp --alsologtostderr -v=1: (5.860044644s)
--- PASS: TestAddons/parallel/Headlamp (17.84s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-lp94z" [6f00e4af-bdde-4bbb-b541-0b3f305b6032] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004236153s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/parallel/LocalPath (51.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-334107 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-334107 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334107 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6ecd6703-be22-4b92-9961-56a8621987a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6ecd6703-be22-4b92-9961-56a8621987a1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6ecd6703-be22-4b92-9961-56a8621987a1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003358074s
addons_test.go:906: (dbg) Run:  kubectl --context addons-334107 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 ssh "cat /opt/local-path-provisioner/pvc-2074cc79-7217-4577-855d-67765c1957bf_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-334107 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-334107 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.591604426s)
--- PASS: TestAddons/parallel/LocalPath (51.78s)

TestAddons/parallel/NvidiaDevicePlugin (6.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nk7kx" [29324f78-b52b-4c83-ae33-af69c72c4c06] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004051512s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

TestAddons/parallel/Yakd (11.98s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-hcsvc" [8f4b18d2-c601-4771-95b1-09a6de4963c5] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00379504s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-334107 addons disable yakd --alsologtostderr -v=1: (5.976598312s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

TestAddons/StoppedEnableDisable (12.17s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-334107
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-334107: (11.879301675s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-334107
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-334107
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-334107
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

TestCertOptions (38.92s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-899363 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-899363 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.197683645s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-899363 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-899363 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-899363 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-899363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-899363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-899363: (2.026896034s)
--- PASS: TestCertOptions (38.92s)

TestCertExpiration (240.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-807977 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0127 12:43:29.177332  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-807977 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.45251511s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-807977 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-807977 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.265457337s)
helpers_test.go:175: Cleaning up "cert-expiration-807977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-807977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-807977: (2.783860777s)
--- PASS: TestCertExpiration (240.50s)

TestForceSystemdFlag (41.34s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-923200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-923200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.154945151s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-923200 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-923200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-923200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-923200: (2.758906676s)
--- PASS: TestForceSystemdFlag (41.34s)

TestForceSystemdEnv (44.7s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-400947 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-400947 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.910540909s)
helpers_test.go:175: Cleaning up "force-systemd-env-400947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-400947
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-400947: (2.788696832s)
--- PASS: TestForceSystemdEnv (44.70s)

TestErrorSpam/setup (28.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-876119 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-876119 --driver=docker  --container-runtime=crio
E0127 11:25:58.653625  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:58.660312  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:58.671611  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:58.692940  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:58.734296  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:58.815661  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:58.977105  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:59.298708  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:25:59.940257  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:26:01.221578  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:26:03.784321  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-876119 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-876119 --driver=docker  --container-runtime=crio: (28.758813129s)
--- PASS: TestErrorSpam/setup (28.76s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 status
E0127 11:26:08.906592  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.84s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (1.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 stop: (1.336802389s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-876119 --log_dir /tmp/nospam-876119 stop
--- PASS: TestErrorSpam/stop (1.56s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20319-300538/.minikube/files/etc/test/nested/copy/305936/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979480 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0127 11:26:19.147964  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:26:39.629254  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-979480 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (48.218494911s)
--- PASS: TestFunctional/serial/StartWithProxy (48.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.99s)

=== RUN   TestFunctional/serial/SoftStart
I0127 11:27:07.058993  305936 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979480 --alsologtostderr -v=8
E0127 11:27:20.591207  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-979480 --alsologtostderr -v=8: (28.980126545s)
functional_test.go:663: soft start took 28.985903235s for "functional-979480" cluster.
I0127 11:27:36.046519  305936 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (28.99s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-979480 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 cache add registry.k8s.io/pause:3.1: (1.504434938s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 cache add registry.k8s.io/pause:3.3: (1.508168707s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 cache add registry.k8s.io/pause:latest: (1.460776958s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-979480 /tmp/TestFunctionalserialCacheCmdcacheadd_local2088113241/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cache add minikube-local-cache-test:functional-979480
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cache delete minikube-local-cache-test:functional-979480
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-979480
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (304.82646ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 cache reload: (1.22916187s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.22s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 kubectl -- --context functional-979480 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.22s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-979480 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.21s)

TestFunctional/serial/ExtraConfig (34.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979480 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-979480 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.597525547s)
functional_test.go:761: restart took 34.59762725s for "functional-979480" cluster.
I0127 11:28:19.988711  305936 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (34.60s)
TestFunctional/serial/ComponentHealth (0.14s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-979480 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)
TestFunctional/serial/LogsCmd (1.78s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 logs: (1.778555835s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)
TestFunctional/serial/LogsFileCmd (1.82s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 logs --file /tmp/TestFunctionalserialLogsFileCmd4077263331/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 logs --file /tmp/TestFunctionalserialLogsFileCmd4077263331/001/logs.txt: (1.816078406s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)
TestFunctional/serial/InvalidService (4.63s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-979480 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-979480
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-979480: exit status 115 (704.288898ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32086 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-979480 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.63s)
TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 config get cpus: exit status 14 (91.59044ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 config get cpus: exit status 14 (77.048021ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
TestFunctional/parallel/DashboardCmd (17.94s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-979480 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-979480 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 333105: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.94s)
TestFunctional/parallel/DryRun (0.47s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-979480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.716225ms)
-- stdout --
	* [functional-979480] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0127 11:29:02.764083  332828 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:29:02.764259  332828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:29:02.764282  332828 out.go:358] Setting ErrFile to fd 2...
	I0127 11:29:02.764302  332828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:29:02.764685  332828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:29:02.765260  332828 out.go:352] Setting JSON to false
	I0127 11:29:02.766237  332828 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7890,"bootTime":1737969453,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 11:29:02.766745  332828 start.go:139] virtualization:  
	I0127 11:29:02.770151  332828 out.go:177] * [functional-979480] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:29:02.773136  332828 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:29:02.773193  332828 notify.go:220] Checking for updates...
	I0127 11:29:02.778646  332828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:29:02.781466  332828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:29:02.784188  332828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 11:29:02.786985  332828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:29:02.790154  332828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:29:02.793601  332828 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:29:02.794203  332828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:29:02.820562  332828 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:29:02.820683  332828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:29:02.888634  332828 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:29:02.877205722 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:29:02.888765  332828 docker.go:318] overlay module found
	I0127 11:29:02.892275  332828 out.go:177] * Using the docker driver based on existing profile
	I0127 11:29:02.895315  332828 start.go:297] selected driver: docker
	I0127 11:29:02.895343  332828 start.go:901] validating driver "docker" against &{Name:functional-979480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-979480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:29:02.895460  332828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:29:02.899130  332828 out.go:201] 
	W0127 11:29:02.901986  332828 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 11:29:02.905037  332828 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979480 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
TestFunctional/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-979480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (198.89159ms)
-- stdout --
	* [functional-979480] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0127 11:29:02.578252  332781 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:29:02.578494  332781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:29:02.578507  332781 out.go:358] Setting ErrFile to fd 2...
	I0127 11:29:02.578513  332781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:29:02.578967  332781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:29:02.579504  332781 out.go:352] Setting JSON to false
	I0127 11:29:02.580526  332781 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7890,"bootTime":1737969453,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 11:29:02.580620  332781 start.go:139] virtualization:  
	I0127 11:29:02.584202  332781 out.go:177] * [functional-979480] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0127 11:29:02.587041  332781 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 11:29:02.587109  332781 notify.go:220] Checking for updates...
	I0127 11:29:02.590419  332781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:29:02.593251  332781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 11:29:02.596009  332781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 11:29:02.600482  332781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:29:02.603312  332781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:29:02.606689  332781 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:29:02.607281  332781 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:29:02.635240  332781 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:29:02.635375  332781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:29:02.692018  332781 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:29:02.68208469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:29:02.692140  332781 docker.go:318] overlay module found
	I0127 11:29:02.695209  332781 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0127 11:29:02.698040  332781 start.go:297] selected driver: docker
	I0127 11:29:02.698057  332781 start.go:901] validating driver "docker" against &{Name:functional-979480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-979480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:29:02.698170  332781 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:29:02.701702  332781 out.go:201] 
	W0127 11:29:02.704550  332781 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 11:29:02.707318  332781 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
TestFunctional/parallel/StatusCmd (1.18s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
TestFunctional/parallel/ServiceCmdConnect (11.7s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-979480 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-979480 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-jglb5" [867883a8-65b7-4b94-b43e-7516f7708eed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0127 11:28:42.512531  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-8449669db6-jglb5" [867883a8-65b7-4b94-b43e-7516f7708eed] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003954658s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31802
functional_test.go:1675: http://192.168.49.2:31802: success! body:
Hostname: hello-node-connect-8449669db6-jglb5
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31802
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.70s)
TestFunctional/parallel/AddonsCmd (0.24s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (28.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4d33de02-c79b-4d3a-a743-5c9a638f4017] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00359546s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-979480 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-979480 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-979480 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-979480 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ba95a291-1628-4115-8738-f343298560f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ba95a291-1628-4115-8738-f343298560f4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004477539s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-979480 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-979480 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-979480 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8d694eb6-c8ba-4a82-83ce-6d363eb3c7e5] Pending
helpers_test.go:344: "sp-pod" [8d694eb6-c8ba-4a82-83ce-6d363eb3c7e5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8d694eb6-c8ba-4a82-83ce-6d363eb3c7e5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003540908s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-979480 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.03s)
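The flow above (create a claim, mount it in a pod, write a file, delete and recreate the pod, confirm the file survived) corresponds roughly to the manifest pair below. This is an illustrative sketch only; the names mirror the log but the actual contents of minikube's testdata/storage-provisioner/*.yaml files are not reproduced here:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim               # claimed first; the default StorageClass provisions it
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi          # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner # the label the test waits on
spec:
  containers:
    - name: myfrontend
      image: nginx            # illustrative image
      volumeMounts:
        - mountPath: /tmp/mount   # files written here outlive the pod
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```

Because the volume is backed by the claim rather than the pod, `touch /tmp/mount/foo` in the first pod is still visible to `ls /tmp/mount` in the second.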

TestFunctional/parallel/SSHCmd (0.82s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (1.91s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh -n functional-979480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cp functional-979480:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd988636127/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh -n functional-979480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh -n functional-979480 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)

TestFunctional/parallel/FileSync (0.28s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/305936/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo cat /etc/test/nested/copy/305936/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.64s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/305936.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo cat /etc/ssl/certs/305936.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/305936.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo cat /usr/share/ca-certificates/305936.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3059362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo cat /etc/ssl/certs/3059362.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3059362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo cat /usr/share/ca-certificates/3059362.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-979480 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh "sudo systemctl is-active docker": exit status 1 (263.79975ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh "sudo systemctl is-active containerd": exit status 1 (294.107159ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
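The non-zero exits above are the expected result: `systemctl is-active` reports the unit state both on stdout and in its exit code (0 for active, commonly 3 for inactive), which is why the log shows stdout `inactive` alongside "Process exited with status 3". A minimal stand-in for that convention, using a hypothetical `is_active_stub` function rather than real systemd:

```shell
#!/bin/sh
# Stub of the `systemctl is-active` convention: the unit state goes to
# stdout, and the exit code encodes it (0 = active, 3 = inactive).
is_active_stub() {
  echo "$1"
  if [ "$1" = "active" ]; then return 0; else return 3; fi
}

is_active_stub inactive   # prints: inactive
echo "exit=$?"            # prints: exit=3
```

The test harness treats "runtime is not active" as the passing case here, so it inverts the usual success check.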

TestFunctional/parallel/License (0.21s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-979480 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-979480 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-979480 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-979480 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 330620: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-979480 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-979480 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ea013f80-d970-49c3-912c-ecdf29d1f6a7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ea013f80-d970-49c3-912c-ecdf29d1f6a7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004645331s
I0127 11:28:38.785518  305936 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-979480 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.109.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-979480 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
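The tunnel tests above hinge on a Service of type LoadBalancer: in a local cluster its `status.loadBalancer.ingress` stays pending until `minikube tunnel` runs and assigns it an IP, which is what the IngressIP step reads back with jsonpath and the AccessDirect step curls. A sketch of the kind of Service involved (illustrative only, not the verbatim contents of testdata/testsvc.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer   # stays pending until `minikube tunnel` provides an IP
  selector:
    run: nginx-svc     # the pod label the Setup step waits on
  ports:
    - port: 80
      targetPort: 80
```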

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-979480 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-979480 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-fc74x" [c7f4c19f-8d18-4f01-a13e-0480ab3acadb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-fc74x" [c7f4c19f-8d18-4f01-a13e-0480ab3acadb] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004527251s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "378.471705ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "77.059477ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "498.239299ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "93.463456ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

TestFunctional/parallel/ServiceCmd/List (0.67s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

TestFunctional/parallel/MountCmd/any-port (9.84s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdany-port3303326838/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737977338697127489" to /tmp/TestFunctionalparallelMountCmdany-port3303326838/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737977338697127489" to /tmp/TestFunctionalparallelMountCmdany-port3303326838/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737977338697127489" to /tmp/TestFunctionalparallelMountCmdany-port3303326838/001/test-1737977338697127489
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.109456ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 11:28:59.117519  305936 retry.go:31] will retry after 659.315631ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 11:28 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 11:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 11:28 test-1737977338697127489
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh cat /mount-9p/test-1737977338697127489
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-979480 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [84b0f1aa-e5c9-4c6b-9e55-e9fa431f978f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [84b0f1aa-e5c9-4c6b-9e55-e9fa431f978f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [84b0f1aa-e5c9-4c6b-9e55-e9fa431f978f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004286991s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-979480 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdany-port3303326838/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.84s)
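The single findmnt failure and retry above is expected: the 9p mount appears asynchronously, and the check keys off grep's exit status (0 on a match, 1 on no match) to decide whether to retry. A self-contained sketch of that status convention, with made-up mount-table lines (`check_mount` is illustrative, not the harness's actual helper):

```shell
#!/bin/sh
# grep -q exits 0 when the pattern matches and 1 when it does not;
# a retry loop can poll this until the 9p mount becomes visible.
check_mount() {
  printf '%s\n' "$1" | grep -q '9p'
}

check_mount "tmpfs /tmp tmpfs rw" || echo "not yet mounted"   # no 9p entry yet
check_mount "host /mount-9p 9p rw" && echo "mounted"          # mount visible
```

In the real test, the same check is re-run after a short backoff (the `retry.go:31` lines in the log) until it succeeds or the timeout expires.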

TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 service list -o json
functional_test.go:1494: Took "664.593668ms" to run "out/minikube-linux-arm64 -p functional-979480 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30556
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.86s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.86s)

TestFunctional/parallel/ServiceCmd/URL (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30556
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

TestFunctional/parallel/MountCmd/specific-port (2s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdspecific-port2013162343/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.088603ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 11:29:08.894997  305936 retry.go:31] will retry after 604.776045ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdspecific-port2013162343/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh "sudo umount -f /mount-9p": exit status 1 (272.205956ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-979480 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdspecific-port2013162343/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1950049959/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1950049959/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1950049959/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T" /mount1: exit status 1 (584.82587ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 11:29:11.123049  305936 retry.go:31] will retry after 635.489084ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-979480 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1950049959/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1950049959/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1950049959/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 version -o=json --components: (1.370485908s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979480 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-979480
localhost/kicbase/echo-server:functional-979480
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979480 image ls --format short --alsologtostderr:
I0127 11:29:27.749602  336046 out.go:345] Setting OutFile to fd 1 ...
I0127 11:29:27.750162  336046 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:27.750180  336046 out.go:358] Setting ErrFile to fd 2...
I0127 11:29:27.750187  336046 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:27.750705  336046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
I0127 11:29:27.752814  336046 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:27.753039  336046 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:27.755710  336046 cli_runner.go:164] Run: docker container inspect functional-979480 --format={{.State.Status}}
I0127 11:29:27.786393  336046 ssh_runner.go:195] Run: systemctl --version
I0127 11:29:27.786446  336046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979480
I0127 11:29:27.817160  336046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/functional-979480/id_rsa Username:docker}
I0127 11:29:27.903382  336046 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979480 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | alpine             | f9d642c42f7bc | 52.3MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 265c2dedf28ab | 95MB   |
| registry.k8s.io/kube-scheduler          | v1.32.1            | ddb38cac617cb | 69MB   |
| docker.io/library/nginx                 | latest             | 781d902f1e046 | 201MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| localhost/minikube-local-cache-test     | functional-979480  | c5c0470c995a5 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| localhost/kicbase/echo-server           | functional-979480  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | 7fc9d4aa817aa | 143MB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 2933761aa7ada | 88.2MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e124fbed851d7 | 98.3MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979480 image ls --format table --alsologtostderr:
I0127 11:29:28.095321  336114 out.go:345] Setting OutFile to fd 1 ...
I0127 11:29:28.095450  336114 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:28.095462  336114 out.go:358] Setting ErrFile to fd 2...
I0127 11:29:28.095468  336114 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:28.095813  336114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
I0127 11:29:28.096610  336114 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:28.096744  336114 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:28.097329  336114 cli_runner.go:164] Run: docker container inspect functional-979480 --format={{.State.Status}}
I0127 11:29:28.116268  336114 ssh_runner.go:195] Run: systemctl --version
I0127 11:29:28.116324  336114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979480
I0127 11:29:28.157214  336114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/functional-979480/id_rsa Username:docker}
I0127 11:29:28.267812  336114 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979480 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-979480"],"size":"4788229"},{"id":"c5c0470c995a55a7ddb35bde734623e5e8402fb5dfd0aade82e52b2b1cc5d5e4","repoDigests":["localhost/minikube-local-cache-test@sha256:c97c0440e6df0f6105f5734528f9cfbdaaa5427cede29c8308f015a80e172a01"],"repoTags":["localhost/minikube-local-cache-test:functional-979480"],"size":"3330"},{"id":"265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"94991840"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712"],"repoTags":["docker.io/library/nginx:latest"],"size":"201125287"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"98313623"},{"id":"ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"68973892"},{"id":"f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10","docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52333544"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"143226622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"88241478"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979480 image ls --format json --alsologtostderr:
I0127 11:29:28.431278  336197 out.go:345] Setting OutFile to fd 1 ...
I0127 11:29:28.431463  336197 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:28.431476  336197 out.go:358] Setting ErrFile to fd 2...
I0127 11:29:28.431483  336197 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:28.431747  336197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
I0127 11:29:28.432434  336197 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:28.432555  336197 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:28.433072  336197 cli_runner.go:164] Run: docker container inspect functional-979480 --format={{.State.Status}}
I0127 11:29:28.466846  336197 ssh_runner.go:195] Run: systemctl --version
I0127 11:29:28.466900  336197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979480
I0127 11:29:28.501055  336197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/functional-979480/id_rsa Username:docker}
I0127 11:29:28.597800  336197 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
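All four `image ls` formats above (short/table/json/yaml) are rendered from the same `sudo crictl images --output json` call visible in each Stderr trace. A hedged sketch of decoding that shape, with field names taken from the JSON dump above (the struct and helper names are illustrative, not minikube's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the JSON listing above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

// shortTags reduces the JSON listing to the flat tag list that the
// `image ls --format short` output above consists of.
func shortTags(raw []byte) ([]string, error) {
	var imgs []image
	if err := json.Unmarshal(raw, &imgs); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range imgs {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	raw := []byte(`[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"}]`)
	tags, err := shortTags(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(tags) // [gcr.io/k8s-minikube/storage-provisioner:v5]
}
```

Images with only digests (the `"repoTags":[]` dashboard and metrics-scraper entries above) simply contribute nothing to the short listing.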

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979480 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-979480
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:244bf1ea0194bd050a2408898d65b6a3259624fdd5a3541788b40b4e94c02fc1
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "68973892"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:0d36a8e2f0f6a06753c1ae9949a9a4a58d752f8364fd2ab083fcd836c37f844d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "98313623"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:313adb872febec3a5740969d5bf1b1df0d222f8fa06675f34db5a7fc437356a1
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "143226622"
- id: 265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:88154e5cc4415415c0cbfb49ad1d63ea2de74614b7b567d5f344c5bcb5c5f244
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "94991840"
- id: 2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:71478b03f55b6a17c25fee181fbaaafb7ac4f5314c4007eb0cf3d35fb20938e3
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "88241478"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:5ad6d1fbf7a41cf81658450236559fd03a80f78e6a5ed21b08e373dec4948712
repoTags:
- docker.io/library/nginx:latest
size: "201125287"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: c5c0470c995a55a7ddb35bde734623e5e8402fb5dfd0aade82e52b2b1cc5d5e4
repoDigests:
- localhost/minikube-local-cache-test@sha256:c97c0440e6df0f6105f5734528f9cfbdaaa5427cede29c8308f015a80e172a01
repoTags:
- localhost/minikube-local-cache-test:functional-979480
size: "3330"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:4338a8ba9b9962d07e30e7ff4bbf27d62ee7523deb7205e8f0912169f1bbac10
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "52333544"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979480 image ls --format yaml --alsologtostderr:
I0127 11:29:27.760199  336045 out.go:345] Setting OutFile to fd 1 ...
I0127 11:29:27.760414  336045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:27.760446  336045 out.go:358] Setting ErrFile to fd 2...
I0127 11:29:27.760469  336045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:27.760726  336045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
I0127 11:29:27.761623  336045 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:27.761912  336045 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:27.762488  336045 cli_runner.go:164] Run: docker container inspect functional-979480 --format={{.State.Status}}
I0127 11:29:27.789913  336045 ssh_runner.go:195] Run: systemctl --version
I0127 11:29:27.789971  336045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979480
I0127 11:29:27.819046  336045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/functional-979480/id_rsa Username:docker}
I0127 11:29:27.920470  336045 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979480 ssh pgrep buildkitd: exit status 1 (384.201152ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image build -t localhost/my-image:functional-979480 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 image build -t localhost/my-image:functional-979480 testdata/build --alsologtostderr: (3.092747501s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979480 image build -t localhost/my-image:functional-979480 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 15e294303c1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-979480
--> 50950a5605a
Successfully tagged localhost/my-image:functional-979480
50950a5605a710a837b00ef6f2c5dbd8ae02987624967ae583babc51b0f41f63
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979480 image build -t localhost/my-image:functional-979480 testdata/build --alsologtostderr:
I0127 11:29:28.419228  336198 out.go:345] Setting OutFile to fd 1 ...
I0127 11:29:28.419972  336198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:28.420011  336198 out.go:358] Setting ErrFile to fd 2...
I0127 11:29:28.420112  336198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:29:28.420422  336198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
I0127 11:29:28.421181  336198 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:28.421909  336198 config.go:182] Loaded profile config "functional-979480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 11:29:28.422537  336198 cli_runner.go:164] Run: docker container inspect functional-979480 --format={{.State.Status}}
I0127 11:29:28.448388  336198 ssh_runner.go:195] Run: systemctl --version
I0127 11:29:28.448445  336198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979480
I0127 11:29:28.472829  336198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/functional-979480/id_rsa Username:docker}
I0127 11:29:28.567696  336198 build_images.go:161] Building image from path: /tmp/build.3848254847.tar
I0127 11:29:28.567774  336198 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 11:29:28.577176  336198 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3848254847.tar
I0127 11:29:28.580797  336198 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3848254847.tar: stat -c "%s %y" /var/lib/minikube/build/build.3848254847.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3848254847.tar': No such file or directory
I0127 11:29:28.580827  336198 ssh_runner.go:362] scp /tmp/build.3848254847.tar --> /var/lib/minikube/build/build.3848254847.tar (3072 bytes)
I0127 11:29:28.611212  336198 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3848254847
I0127 11:29:28.622392  336198 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3848254847 -xf /var/lib/minikube/build/build.3848254847.tar
I0127 11:29:28.632436  336198 crio.go:315] Building image: /var/lib/minikube/build/build.3848254847
I0127 11:29:28.632554  336198 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-979480 /var/lib/minikube/build/build.3848254847 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0127 11:29:31.413903  336198 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-979480 /var/lib/minikube/build/build.3848254847 --cgroup-manager=cgroupfs: (2.781309929s)
I0127 11:29:31.413995  336198 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3848254847
I0127 11:29:31.424179  336198 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3848254847.tar
I0127 11:29:31.433719  336198 build_images.go:217] Built localhost/my-image:functional-979480 from /tmp/build.3848254847.tar
I0127 11:29:31.433750  336198 build_images.go:133] succeeded building to: functional-979480
I0127 11:29:31.433756  336198 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

TestFunctional/parallel/ImageCommands/Setup (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-979480
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image load --daemon kicbase/echo-server:functional-979480 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 image load --daemon kicbase/echo-server:functional-979480 --alsologtostderr: (1.250876127s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image load --daemon kicbase/echo-server:functional-979480 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-979480 image load --daemon kicbase/echo-server:functional-979480 --alsologtostderr: (2.398905851s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.70s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2025/01/27 11:29:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (5.257059379s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-979480
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image load --daemon kicbase/echo-server:functional-979480 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.46s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image save kicbase/echo-server:functional-979480 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image rm kicbase/echo-server:functional-979480 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-979480
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-979480 image save --daemon kicbase/echo-server:functional-979480 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-979480
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-979480
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-979480
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-979480
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (179.54s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-125122 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0127 11:30:58.652293  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:26.354298  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-125122 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m58.667877887s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (179.54s)

TestMultiControlPlane/serial/DeployApp (56.94s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-125122 -- rollout status deployment/busybox: (5.949318835s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:32:40.561818  305936 retry.go:31] will retry after 1.353573805s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:32:42.093617  305936 retry.go:31] will retry after 1.837806544s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:32:44.125056  305936 retry.go:31] will retry after 2.172469609s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:32:46.487853  305936 retry.go:31] will retry after 1.851185967s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:32:48.532714  305936 retry.go:31] will retry after 3.855386331s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:32:52.564131  305936 retry.go:31] will retry after 4.036676005s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:32:56.793756  305936 retry.go:31] will retry after 10.723609424s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
I0127 11:33:07.697788  305936 retry.go:31] will retry after 20.514666505s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.0.4 10.244.1.2 10.244.2.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-7c8kd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-dn6n5 -- nslookup kubernetes.io
E0127 11:33:29.176962  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:33:29.183316  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:33:29.194669  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:33:29.216009  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:33:29.257441  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:33:29.339028  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-nj6tb -- nslookup kubernetes.io
E0127 11:33:29.503859  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-7c8kd -- nslookup kubernetes.default
E0127 11:33:29.825599  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-dn6n5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-nj6tb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-7c8kd -- nslookup kubernetes.default.svc.cluster.local
E0127 11:33:30.467385  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-dn6n5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-nj6tb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (56.94s)

TestMultiControlPlane/serial/PingHostFromPods (1.71s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-7c8kd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-7c8kd -- sh -c "ping -c 1 192.168.49.1"
E0127 11:33:31.748782  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-dn6n5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-dn6n5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-nj6tb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-125122 -- exec busybox-58667487b6-nj6tb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)

TestMultiControlPlane/serial/AddWorkerNode (33.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-125122 -v=7 --alsologtostderr
E0127 11:33:34.310741  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:33:39.433240  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:33:49.674644  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-125122 -v=7 --alsologtostderr: (32.250680943s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr: (1.031155468s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.28s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-125122 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.004813025s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

TestMultiControlPlane/serial/CopyFile (19.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-125122 status --output json -v=7 --alsologtostderr: (1.00732607s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp testdata/cp-test.txt ha-125122:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4092524598/001/cp-test_ha-125122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122:/home/docker/cp-test.txt ha-125122-m02:/home/docker/cp-test_ha-125122_ha-125122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test.txt"
E0127 11:34:10.156965  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test_ha-125122_ha-125122-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122:/home/docker/cp-test.txt ha-125122-m03:/home/docker/cp-test_ha-125122_ha-125122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test_ha-125122_ha-125122-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122:/home/docker/cp-test.txt ha-125122-m04:/home/docker/cp-test_ha-125122_ha-125122-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test_ha-125122_ha-125122-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp testdata/cp-test.txt ha-125122-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4092524598/001/cp-test_ha-125122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m02:/home/docker/cp-test.txt ha-125122:/home/docker/cp-test_ha-125122-m02_ha-125122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test_ha-125122-m02_ha-125122.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m02:/home/docker/cp-test.txt ha-125122-m03:/home/docker/cp-test_ha-125122-m02_ha-125122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test_ha-125122-m02_ha-125122-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m02:/home/docker/cp-test.txt ha-125122-m04:/home/docker/cp-test_ha-125122-m02_ha-125122-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test_ha-125122-m02_ha-125122-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp testdata/cp-test.txt ha-125122-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4092524598/001/cp-test_ha-125122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m03:/home/docker/cp-test.txt ha-125122:/home/docker/cp-test_ha-125122-m03_ha-125122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test_ha-125122-m03_ha-125122.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m03:/home/docker/cp-test.txt ha-125122-m02:/home/docker/cp-test_ha-125122-m03_ha-125122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test_ha-125122-m03_ha-125122-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m03:/home/docker/cp-test.txt ha-125122-m04:/home/docker/cp-test_ha-125122-m03_ha-125122-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test_ha-125122-m03_ha-125122-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp testdata/cp-test.txt ha-125122-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4092524598/001/cp-test_ha-125122-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m04:/home/docker/cp-test.txt ha-125122:/home/docker/cp-test_ha-125122-m04_ha-125122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122 "sudo cat /home/docker/cp-test_ha-125122-m04_ha-125122.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m04:/home/docker/cp-test.txt ha-125122-m02:/home/docker/cp-test_ha-125122-m04_ha-125122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m02 "sudo cat /home/docker/cp-test_ha-125122-m04_ha-125122-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 cp ha-125122-m04:/home/docker/cp-test.txt ha-125122-m03:/home/docker/cp-test_ha-125122-m04_ha-125122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 ssh -n ha-125122-m03 "sudo cat /home/docker/cp-test_ha-125122-m04_ha-125122-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.06s)
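The CopyFile sequence above follows one pattern throughout: copy a file to a node with `minikube cp`, then read it back over `minikube ssh -n <node> "sudo cat <path>"` and compare. A minimal local sketch of that round-trip check (plain `cp`/`cat` standing in for the minikube commands; file contents and paths here are made up, not from the run):

```shell
# Stand-in for the cp-test round trip: copy a file, read it back,
# and compare with the original, as the suite does via
# `minikube cp` followed by `minikube ssh -n <node> "sudo cat ..."`.
src=$(mktemp) && dst=$(mktemp)
printf 'hello from cp-test\n' > "$src"
cp "$src" "$dst"                  # ~ minikube cp testdata/cp-test.txt node:/home/docker/cp-test.txt
copied=$(cat "$dst")              # ~ minikube ssh -n node "sudo cat /home/docker/cp-test.txt"
[ "$copied" = "hello from cp-test" ] && echo "round trip ok"
rm -f "$src" "$dst"
```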

TestMultiControlPlane/serial/StopSecondaryNode (12.77s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-125122 node stop m02 -v=7 --alsologtostderr: (12.004018829s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr: exit status 7 (764.658592ms)

-- stdout --
	ha-125122
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-125122-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-125122-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-125122-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0127 11:34:38.353120  352234 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:34:38.353298  352234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:34:38.353310  352234 out.go:358] Setting ErrFile to fd 2...
	I0127 11:34:38.353314  352234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:34:38.353579  352234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:34:38.353757  352234 out.go:352] Setting JSON to false
	I0127 11:34:38.353791  352234 mustload.go:65] Loading cluster: ha-125122
	I0127 11:34:38.353849  352234 notify.go:220] Checking for updates...
	I0127 11:34:38.354240  352234 config.go:182] Loaded profile config "ha-125122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:34:38.354265  352234 status.go:174] checking status of ha-125122 ...
	I0127 11:34:38.354806  352234 cli_runner.go:164] Run: docker container inspect ha-125122 --format={{.State.Status}}
	I0127 11:34:38.375451  352234 status.go:371] ha-125122 host status = "Running" (err=<nil>)
	I0127 11:34:38.375478  352234 host.go:66] Checking if "ha-125122" exists ...
	I0127 11:34:38.375860  352234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-125122
	I0127 11:34:38.402556  352234 host.go:66] Checking if "ha-125122" exists ...
	I0127 11:34:38.402973  352234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:34:38.403033  352234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-125122
	I0127 11:34:38.432160  352234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/ha-125122/id_rsa Username:docker}
	I0127 11:34:38.524450  352234 ssh_runner.go:195] Run: systemctl --version
	I0127 11:34:38.529118  352234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:34:38.544559  352234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:34:38.600321  352234 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-27 11:34:38.590473644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:34:38.601852  352234 kubeconfig.go:125] found "ha-125122" server: "https://192.168.49.254:8443"
	I0127 11:34:38.601893  352234 api_server.go:166] Checking apiserver status ...
	I0127 11:34:38.601943  352234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:34:38.614142  352234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1472/cgroup
	I0127 11:34:38.623335  352234 api_server.go:182] apiserver freezer: "5:freezer:/docker/5180b9c83ef199f6526b7d3e11dc14c5a3a6bd6c169bdb0b6cc3a4ee46e64869/crio/crio-fffcb25551f360b6778c802d67b44279c8d7b9ebf98f268d2d270f1a706aa003"
	I0127 11:34:38.623429  352234 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5180b9c83ef199f6526b7d3e11dc14c5a3a6bd6c169bdb0b6cc3a4ee46e64869/crio/crio-fffcb25551f360b6778c802d67b44279c8d7b9ebf98f268d2d270f1a706aa003/freezer.state
	I0127 11:34:38.632144  352234 api_server.go:204] freezer state: "THAWED"
	I0127 11:34:38.632174  352234 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 11:34:38.642032  352234 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 11:34:38.642063  352234 status.go:463] ha-125122 apiserver status = Running (err=<nil>)
	I0127 11:34:38.642073  352234 status.go:176] ha-125122 status: &{Name:ha-125122 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:34:38.642089  352234 status.go:174] checking status of ha-125122-m02 ...
	I0127 11:34:38.642407  352234 cli_runner.go:164] Run: docker container inspect ha-125122-m02 --format={{.State.Status}}
	I0127 11:34:38.660052  352234 status.go:371] ha-125122-m02 host status = "Stopped" (err=<nil>)
	I0127 11:34:38.660074  352234 status.go:384] host is not running, skipping remaining checks
	I0127 11:34:38.660081  352234 status.go:176] ha-125122-m02 status: &{Name:ha-125122-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:34:38.660102  352234 status.go:174] checking status of ha-125122-m03 ...
	I0127 11:34:38.660408  352234 cli_runner.go:164] Run: docker container inspect ha-125122-m03 --format={{.State.Status}}
	I0127 11:34:38.682894  352234 status.go:371] ha-125122-m03 host status = "Running" (err=<nil>)
	I0127 11:34:38.682919  352234 host.go:66] Checking if "ha-125122-m03" exists ...
	I0127 11:34:38.686375  352234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-125122-m03
	I0127 11:34:38.707713  352234 host.go:66] Checking if "ha-125122-m03" exists ...
	I0127 11:34:38.708043  352234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:34:38.708094  352234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-125122-m03
	I0127 11:34:38.726619  352234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/ha-125122-m03/id_rsa Username:docker}
	I0127 11:34:38.834183  352234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:34:38.847415  352234 kubeconfig.go:125] found "ha-125122" server: "https://192.168.49.254:8443"
	I0127 11:34:38.847485  352234 api_server.go:166] Checking apiserver status ...
	I0127 11:34:38.847533  352234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:34:38.858691  352234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1339/cgroup
	I0127 11:34:38.868419  352234 api_server.go:182] apiserver freezer: "5:freezer:/docker/ed14ecf2f88b1856ac68e124dc3f1c08b33f8794d06bba816f829705e16d002a/crio/crio-91a5fbcfca2c97c33134728bc9f6c133c0fa663ce9e4a4228a547a0aa5fac594"
	I0127 11:34:38.868497  352234 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ed14ecf2f88b1856ac68e124dc3f1c08b33f8794d06bba816f829705e16d002a/crio/crio-91a5fbcfca2c97c33134728bc9f6c133c0fa663ce9e4a4228a547a0aa5fac594/freezer.state
	I0127 11:34:38.878042  352234 api_server.go:204] freezer state: "THAWED"
	I0127 11:34:38.878096  352234 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 11:34:38.886722  352234 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 11:34:38.886748  352234 status.go:463] ha-125122-m03 apiserver status = Running (err=<nil>)
	I0127 11:34:38.886757  352234 status.go:176] ha-125122-m03 status: &{Name:ha-125122-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:34:38.886774  352234 status.go:174] checking status of ha-125122-m04 ...
	I0127 11:34:38.887145  352234 cli_runner.go:164] Run: docker container inspect ha-125122-m04 --format={{.State.Status}}
	I0127 11:34:38.904823  352234 status.go:371] ha-125122-m04 host status = "Running" (err=<nil>)
	I0127 11:34:38.904849  352234 host.go:66] Checking if "ha-125122-m04" exists ...
	I0127 11:34:38.905218  352234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-125122-m04
	I0127 11:34:38.924631  352234 host.go:66] Checking if "ha-125122-m04" exists ...
	I0127 11:34:38.924971  352234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:34:38.925019  352234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-125122-m04
	I0127 11:34:38.944482  352234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/ha-125122-m04/id_rsa Username:docker}
	I0127 11:34:39.032654  352234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:34:39.046432  352234 status.go:176] ha-125122-m04 status: &{Name:ha-125122-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.77s)
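In the stderr trace above, the status probe finds the kube-apiserver PID with `pgrep`, greps its `freezer` entry out of `/proc/<pid>/cgroup`, and then reads `freezer.state` under `/sys/fs/cgroup/freezer/...` to confirm the container is THAWED. A sketch of how that state-file path is derived from a cgroup-v1 line (the container IDs below are placeholders, not from this run):

```shell
# How the freezer.state path in the trace is built from a /proc/<pid>/cgroup
# line on a cgroup v1 host. Placeholder IDs, not the ones from the run.
line='5:freezer:/docker/abc123/crio/crio-def456'
# Field 3 (everything after the second colon) is the cgroup path; prefix it
# with the freezer hierarchy mount point and append freezer.state.
cgpath=${line#*:*:}
state_file="/sys/fs/cgroup/freezer${cgpath}/freezer.state"
echo "$state_file"
```

If the file reads `THAWED`, the harness goes on to curl `https://<vip>:8443/healthz`, as the trace shows.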

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (26.22s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 node start m02 -v=7 --alsologtostderr
E0127 11:34:51.119109  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-125122 node start m02 -v=7 --alsologtostderr: (24.474712138s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr: (1.59218521s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.22s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.380174078s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.72s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-125122 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-125122 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-125122 -v=7 --alsologtostderr: (37.153770015s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-125122 --wait=true -v=7 --alsologtostderr
E0127 11:35:58.652063  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:36:13.043630  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-125122 --wait=true -v=7 --alsologtostderr: (2m36.381066822s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-125122
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.72s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.63s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 node delete m03 -v=7 --alsologtostderr
E0127 11:38:29.176889  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-125122 node delete m03 -v=7 --alsologtostderr: (10.64537818s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.63s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (35.83s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 stop -v=7 --alsologtostderr
E0127 11:38:56.891199  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-125122 stop -v=7 --alsologtostderr: (35.707254798s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr: exit status 7 (125.661841ms)

-- stdout --
	ha-125122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-125122-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-125122-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:39:09.272573  366705 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:39:09.272722  366705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:39:09.272734  366705 out.go:358] Setting ErrFile to fd 2...
	I0127 11:39:09.272740  366705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:39:09.273000  366705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:39:09.273192  366705 out.go:352] Setting JSON to false
	I0127 11:39:09.273237  366705 mustload.go:65] Loading cluster: ha-125122
	I0127 11:39:09.273341  366705 notify.go:220] Checking for updates...
	I0127 11:39:09.273726  366705 config.go:182] Loaded profile config "ha-125122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:39:09.273750  366705 status.go:174] checking status of ha-125122 ...
	I0127 11:39:09.274253  366705 cli_runner.go:164] Run: docker container inspect ha-125122 --format={{.State.Status}}
	I0127 11:39:09.294391  366705 status.go:371] ha-125122 host status = "Stopped" (err=<nil>)
	I0127 11:39:09.294417  366705 status.go:384] host is not running, skipping remaining checks
	I0127 11:39:09.294424  366705 status.go:176] ha-125122 status: &{Name:ha-125122 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:39:09.294456  366705 status.go:174] checking status of ha-125122-m02 ...
	I0127 11:39:09.294785  366705 cli_runner.go:164] Run: docker container inspect ha-125122-m02 --format={{.State.Status}}
	I0127 11:39:09.318959  366705 status.go:371] ha-125122-m02 host status = "Stopped" (err=<nil>)
	I0127 11:39:09.318986  366705 status.go:384] host is not running, skipping remaining checks
	I0127 11:39:09.318993  366705 status.go:176] ha-125122-m02 status: &{Name:ha-125122-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:39:09.319012  366705 status.go:174] checking status of ha-125122-m04 ...
	I0127 11:39:09.319339  366705 cli_runner.go:164] Run: docker container inspect ha-125122-m04 --format={{.State.Status}}
	I0127 11:39:09.340369  366705 status.go:371] ha-125122-m04 host status = "Stopped" (err=<nil>)
	I0127 11:39:09.340394  366705 status.go:384] host is not running, skipping remaining checks
	I0127 11:39:09.340401  366705 status.go:176] ha-125122-m04 status: &{Name:ha-125122-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.83s)
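The `-- stdout --` block above is what `minikube status` prints for the fully stopped cluster; the non-zero exit (status 7) is expected when any host is down. A quick way to tally host states from that output, sketched on a sample abbreviated from the run:

```shell
# Count host states in `minikube status` output. The sample below mirrors
# (in abbreviated form) the stdout block for the stopped ha-125122 cluster.
status='ha-125122
host: Stopped
ha-125122-m02
host: Stopped
ha-125122-m04
host: Stopped'
stopped=$(printf '%s\n' "$status" | grep -c '^host: Stopped')
echo "$stopped hosts stopped"
```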

TestMultiControlPlane/serial/RestartCluster (103.18s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-125122 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-125122 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m42.212297397s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.18s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (73.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-125122 --control-plane -v=7 --alsologtostderr
E0127 11:40:58.654232  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-125122 --control-plane -v=7 --alsologtostderr: (1m12.359971934s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-125122 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

TestJSONOutput/start/Command (51.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-190659 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0127 11:42:21.715635  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-190659 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.478122489s)
--- PASS: TestJSONOutput/start/Command (51.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-190659 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-190659 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-190659 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-190659 --output=json --user=testUser: (5.869726554s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-581805 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-581805 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.061134ms)
-- stdout --
	{"specversion":"1.0","id":"4ec6aaea-ccdc-46e8-a2da-f4244a95ae76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-581805] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee7f9a11-852d-4727-90e4-ceb246c56af0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20319"}}
	{"specversion":"1.0","id":"4a9bdb79-ecac-4229-a5f7-62284ed65f34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c3a103c6-9f0a-409c-a4fa-c6485db85a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig"}}
	{"specversion":"1.0","id":"17c944e3-f414-4b2c-bdd2-6c65a76b1531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube"}}
	{"specversion":"1.0","id":"29dcfe90-e63d-43ce-bf87-cf248e12220f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8e4950c5-b194-4995-9b09-05e04088f089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"065fee06-34a6-47b7-ac8b-38b2d7ca1a8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-581805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-581805
--- PASS: TestErrorJSONOutput (0.25s)
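The `--output=json` stream above is newline-delimited CloudEvents; a minimal sketch of pulling the error event's exit code out of such a stream (the event is abridged from the log above, and the filtering logic is an illustration, not minikube code):

```python
import json

# One line copied (abridged) from the --output=json stream above.
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver \'fail\' is not supported on linux/arm64"}}',
]

# Keep only error events and read the exit code the test asserts on.
errors = [json.loads(line) for line in stream
          if json.loads(line)["type"] == "io.k8s.sigs.minikube.error"]
print(errors[0]["data"]["exitcode"])  # 56
```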

TestKicCustomNetwork/create_custom_network (40.92s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-361857 --network=
E0127 11:43:29.176926  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-361857 --network=: (38.842978349s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-361857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-361857
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-361857: (2.050201627s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.92s)

TestKicCustomNetwork/use_default_bridge_network (37.27s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-782381 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-782381 --network=bridge: (35.18583864s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-782381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-782381
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-782381: (2.05529077s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.27s)

TestKicExistingNetwork (35.8s)

=== RUN   TestKicExistingNetwork
I0127 11:44:37.104427  305936 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 11:44:37.120468  305936 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 11:44:37.121408  305936 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0127 11:44:37.121447  305936 cli_runner.go:164] Run: docker network inspect existing-network
W0127 11:44:37.136247  305936 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0127 11:44:37.136283  305936 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0127 11:44:37.136298  305936 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0127 11:44:37.136494  305936 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 11:44:37.155922  305936 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-83a41a4be89e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bb:86:ff:d6} reservation:<nil>}
I0127 11:44:37.156344  305936 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001f42f90}
I0127 11:44:37.156372  305936 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0127 11:44:37.156426  305936 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0127 11:44:37.229536  305936 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-561615 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-561615 --network=existing-network: (33.649237894s)
helpers_test.go:175: Cleaning up "existing-network-561615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-561615
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-561615: (1.990954702s)
I0127 11:45:12.887633  305936 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.80s)
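The network_create lines above show the free-subnet scan: 192.168.49.0/24 is taken by an existing bridge, so 192.168.58.0/24 is chosen. A rough re-creation of that scan; the step of 9 between candidate subnets is inferred from the log (49 → 58, and 192.168.67.x appears later), not taken from minikube source:

```python
import ipaddress

# Subnets already claimed by existing Docker bridges (from the log above).
taken = {ipaddress.ip_network("192.168.49.0/24")}

# Candidate private /24s starting at 192.168.49.0, stepping by 9 as the
# log suggests; pick the first one not already in use.
candidates = (ipaddress.ip_network(f"192.168.{third}.0/24")
              for third in range(49, 256, 9))
free = next(net for net in candidates if net not in taken)
print(free)  # 192.168.58.0/24
```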

TestKicCustomSubnet (36.06s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-395517 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-395517 --subnet=192.168.60.0/24: (33.948810886s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-395517 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-395517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-395517
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-395517: (2.084770394s)
--- PASS: TestKicCustomSubnet (36.06s)
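The `docker network inspect --format` call above indexes into `.IPAM.Config`; the same lookup done on the raw inspect JSON (sample abridged to just the fields the template touches, subnet value from the log):

```python
import json

# Abridged `docker network inspect custom-subnet-395517` output; only the
# fields used by the --format template are kept here.
inspect_out = """
[{"Name": "custom-subnet-395517",
  "IPAM": {"Config": [{"Subnet": "192.168.60.0/24"}]}}]
"""

net = json.loads(inspect_out)[0]
# Equivalent of the Go template {{(index .IPAM.Config 0).Subnet}}.
print(net["IPAM"]["Config"][0]["Subnet"])  # 192.168.60.0/24
```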

TestKicStaticIP (33.75s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-133694 --static-ip=192.168.200.200
E0127 11:45:58.659220  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-133694 --static-ip=192.168.200.200: (31.425430689s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-133694 ip
helpers_test.go:175: Cleaning up "static-ip-133694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-133694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-133694: (2.147301483s)
--- PASS: TestKicStaticIP (33.75s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (65.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-576694 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-576694 --driver=docker  --container-runtime=crio: (30.208263506s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-579820 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-579820 --driver=docker  --container-runtime=crio: (30.299019164s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-576694
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-579820
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-579820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-579820
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-579820: (2.041975769s)
helpers_test.go:175: Cleaning up "first-576694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-576694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-576694: (1.973220596s)
--- PASS: TestMinikubeProfile (65.91s)

TestMountStart/serial/StartWithMountFirst (9.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-216248 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-216248 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.188328127s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.19s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-216248 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-218307 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-218307 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.826317889s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.83s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-218307 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-216248 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-216248 --alsologtostderr -v=5: (1.630988601s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-218307 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-218307
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-218307: (1.209186005s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-218307
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-218307: (6.884226829s)
--- PASS: TestMountStart/serial/RestartStopped (7.88s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-218307 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (108.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-868030 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0127 11:48:29.176982  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-868030 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m48.437804963s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.97s)

TestMultiNode/serial/DeployApp2Nodes (6.86s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- rollout status deployment/busybox
E0127 11:49:52.253022  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-868030 -- rollout status deployment/busybox: (4.882467883s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-76tk2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-ln5b5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-76tk2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-ln5b5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-76tk2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-ln5b5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.86s)
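The jsonpath queries above project one field across `.items[*]` into a space-separated list; the same projection over a stub pod list (the pod names come from the log, the podIP values are placeholders):

```python
# Stub of the API object behind `kubectl get pods -o json`; only the pod
# names are taken from the log above, the IPs are illustrative.
pods = {"items": [
    {"metadata": {"name": "busybox-58667487b6-76tk2"},
     "status": {"podIP": "10.244.0.3"}},
    {"metadata": {"name": "busybox-58667487b6-ln5b5"},
     "status": {"podIP": "10.244.1.2"}},
]}

# jsonpath '{.items[*].status.podIP}' and '{.items[*].metadata.name}'
# both expand to space-separated lists:
pod_ips = " ".join(p["status"]["podIP"] for p in pods["items"])
pod_names = " ".join(p["metadata"]["name"] for p in pods["items"])
print(pod_names)
```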

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-76tk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-76tk2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-ln5b5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-868030 -- exec busybox-58667487b6-ln5b5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
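The `sh -c` pipeline above takes field 3 of line 5 of nslookup's output to recover the host gateway address (192.168.67.1, which the follow-up ping targets). The same slicing in Python, over a hypothetical BusyBox-style nslookup transcript:

```python
# Hypothetical BusyBox nslookup output; the resolved gateway address
# matches the IP pinged in the log above.
nslookup_out = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal
"""

line5 = nslookup_out.splitlines()[4]   # awk 'NR==5'
ip = line5.split(" ")[2]               # cut -d' ' -f3
print(ip)  # 192.168.67.1
```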

TestMultiNode/serial/AddNode (30.12s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-868030 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-868030 -v 3 --alsologtostderr: (29.458343488s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.12s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-868030 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.15s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp testdata/cp-test.txt multinode-868030:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile539082254/001/cp-test_multinode-868030.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030:/home/docker/cp-test.txt multinode-868030-m02:/home/docker/cp-test_multinode-868030_multinode-868030-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m02 "sudo cat /home/docker/cp-test_multinode-868030_multinode-868030-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030:/home/docker/cp-test.txt multinode-868030-m03:/home/docker/cp-test_multinode-868030_multinode-868030-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m03 "sudo cat /home/docker/cp-test_multinode-868030_multinode-868030-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp testdata/cp-test.txt multinode-868030-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile539082254/001/cp-test_multinode-868030-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030-m02:/home/docker/cp-test.txt multinode-868030:/home/docker/cp-test_multinode-868030-m02_multinode-868030.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030 "sudo cat /home/docker/cp-test_multinode-868030-m02_multinode-868030.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030-m02:/home/docker/cp-test.txt multinode-868030-m03:/home/docker/cp-test_multinode-868030-m02_multinode-868030-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m03 "sudo cat /home/docker/cp-test_multinode-868030-m02_multinode-868030-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp testdata/cp-test.txt multinode-868030-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile539082254/001/cp-test_multinode-868030-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030-m03:/home/docker/cp-test.txt multinode-868030:/home/docker/cp-test_multinode-868030-m03_multinode-868030.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030 "sudo cat /home/docker/cp-test_multinode-868030-m03_multinode-868030.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 cp multinode-868030-m03:/home/docker/cp-test.txt multinode-868030-m02:/home/docker/cp-test_multinode-868030-m03_multinode-868030-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 ssh -n multinode-868030-m02 "sudo cat /home/docker/cp-test_multinode-868030-m03_multinode-868030-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.15s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-868030 node stop m03: (1.224564771s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-868030 status: exit status 7 (528.309201ms)

-- stdout --
	multinode-868030
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-868030-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-868030-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr: exit status 7 (523.681732ms)

-- stdout --
	multinode-868030
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-868030-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-868030-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:50:38.956780  420262 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:50:38.956990  420262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:50:38.957031  420262 out.go:358] Setting ErrFile to fd 2...
	I0127 11:50:38.957200  420262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:50:38.957488  420262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:50:38.957710  420262 out.go:352] Setting JSON to false
	I0127 11:50:38.957780  420262 mustload.go:65] Loading cluster: multinode-868030
	I0127 11:50:38.958293  420262 config.go:182] Loaded profile config "multinode-868030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:50:38.958354  420262 status.go:174] checking status of multinode-868030 ...
	I0127 11:50:38.958935  420262 cli_runner.go:164] Run: docker container inspect multinode-868030 --format={{.State.Status}}
	I0127 11:50:38.957844  420262 notify.go:220] Checking for updates...
	I0127 11:50:38.977798  420262 status.go:371] multinode-868030 host status = "Running" (err=<nil>)
	I0127 11:50:38.977824  420262 host.go:66] Checking if "multinode-868030" exists ...
	I0127 11:50:38.978157  420262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-868030
	I0127 11:50:39.014169  420262 host.go:66] Checking if "multinode-868030" exists ...
	I0127 11:50:39.014550  420262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:50:39.014634  420262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-868030
	I0127 11:50:39.034592  420262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/multinode-868030/id_rsa Username:docker}
	I0127 11:50:39.120239  420262 ssh_runner.go:195] Run: systemctl --version
	I0127 11:50:39.124263  420262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:50:39.136325  420262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:50:39.205724  420262 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-27 11:50:39.196560717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:50:39.206318  420262 kubeconfig.go:125] found "multinode-868030" server: "https://192.168.67.2:8443"
	I0127 11:50:39.206357  420262 api_server.go:166] Checking apiserver status ...
	I0127 11:50:39.206403  420262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:50:39.217261  420262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	I0127 11:50:39.226777  420262 api_server.go:182] apiserver freezer: "5:freezer:/docker/c11740101d2317f265c2dac089e1ae8c1a370abbbd3bcdc66587d15b1ba4eadc/crio/crio-24b9a9b134244b13e10d21a7c384108ca37e24fd4badaa3bd9ac8a79cc1c1f68"
	I0127 11:50:39.226856  420262 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c11740101d2317f265c2dac089e1ae8c1a370abbbd3bcdc66587d15b1ba4eadc/crio/crio-24b9a9b134244b13e10d21a7c384108ca37e24fd4badaa3bd9ac8a79cc1c1f68/freezer.state
	I0127 11:50:39.235437  420262 api_server.go:204] freezer state: "THAWED"
	I0127 11:50:39.235486  420262 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0127 11:50:39.243924  420262 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0127 11:50:39.243950  420262 status.go:463] multinode-868030 apiserver status = Running (err=<nil>)
	I0127 11:50:39.243961  420262 status.go:176] multinode-868030 status: &{Name:multinode-868030 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:50:39.243978  420262 status.go:174] checking status of multinode-868030-m02 ...
	I0127 11:50:39.244288  420262 cli_runner.go:164] Run: docker container inspect multinode-868030-m02 --format={{.State.Status}}
	I0127 11:50:39.262117  420262 status.go:371] multinode-868030-m02 host status = "Running" (err=<nil>)
	I0127 11:50:39.262147  420262 host.go:66] Checking if "multinode-868030-m02" exists ...
	I0127 11:50:39.262455  420262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-868030-m02
	I0127 11:50:39.280341  420262 host.go:66] Checking if "multinode-868030-m02" exists ...
	I0127 11:50:39.280685  420262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:50:39.280739  420262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-868030-m02
	I0127 11:50:39.299294  420262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/20319-300538/.minikube/machines/multinode-868030-m02/id_rsa Username:docker}
	I0127 11:50:39.389146  420262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:50:39.400981  420262 status.go:176] multinode-868030-m02 status: &{Name:multinode-868030-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:50:39.401069  420262 status.go:174] checking status of multinode-868030-m03 ...
	I0127 11:50:39.401427  420262 cli_runner.go:164] Run: docker container inspect multinode-868030-m03 --format={{.State.Status}}
	I0127 11:50:39.422126  420262 status.go:371] multinode-868030-m03 host status = "Stopped" (err=<nil>)
	I0127 11:50:39.422244  420262 status.go:384] host is not running, skipping remaining checks
	I0127 11:50:39.422259  420262 status.go:176] multinode-868030-m03 status: &{Name:multinode-868030-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

TestMultiNode/serial/StartAfterStop (10.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-868030 node start m03 -v=7 --alsologtostderr: (9.40841609s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.17s)

TestMultiNode/serial/RestartKeepsNodes (80.65s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-868030
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-868030
E0127 11:50:58.656095  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-868030: (24.814771178s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-868030 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-868030 --wait=true -v=8 --alsologtostderr: (55.696180908s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-868030
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.65s)

TestMultiNode/serial/DeleteNode (5.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-868030 node delete m03: (4.623743996s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-868030 stop: (23.640244464s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-868030 status: exit status 7 (99.929245ms)

-- stdout --
	multinode-868030
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-868030-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr: exit status 7 (106.558184ms)

-- stdout --
	multinode-868030
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-868030-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 11:52:39.382785  427648 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:52:39.382906  427648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:52:39.382917  427648 out.go:358] Setting ErrFile to fd 2...
	I0127 11:52:39.382922  427648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:52:39.383195  427648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 11:52:39.383383  427648 out.go:352] Setting JSON to false
	I0127 11:52:39.383433  427648 mustload.go:65] Loading cluster: multinode-868030
	I0127 11:52:39.383500  427648 notify.go:220] Checking for updates...
	I0127 11:52:39.384759  427648 config.go:182] Loaded profile config "multinode-868030": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 11:52:39.384795  427648 status.go:174] checking status of multinode-868030 ...
	I0127 11:52:39.385512  427648 cli_runner.go:164] Run: docker container inspect multinode-868030 --format={{.State.Status}}
	I0127 11:52:39.404060  427648 status.go:371] multinode-868030 host status = "Stopped" (err=<nil>)
	I0127 11:52:39.404084  427648 status.go:384] host is not running, skipping remaining checks
	I0127 11:52:39.404107  427648 status.go:176] multinode-868030 status: &{Name:multinode-868030 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:52:39.404138  427648 status.go:174] checking status of multinode-868030-m02 ...
	I0127 11:52:39.404457  427648 cli_runner.go:164] Run: docker container inspect multinode-868030-m02 --format={{.State.Status}}
	I0127 11:52:39.432193  427648 status.go:371] multinode-868030-m02 host status = "Stopped" (err=<nil>)
	I0127 11:52:39.432220  427648 status.go:384] host is not running, skipping remaining checks
	I0127 11:52:39.432226  427648 status.go:176] multinode-868030-m02 status: &{Name:multinode-868030-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (55.77s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-868030 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0127 11:53:29.177239  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-868030 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.098523674s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-868030 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.77s)

TestMultiNode/serial/ValidateNameConflict (34.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-868030
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-868030-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-868030-m02 --driver=docker  --container-runtime=crio: exit status 14 (114.089174ms)

-- stdout --
	* [multinode-868030-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-868030-m02' is duplicated with machine name 'multinode-868030-m02' in profile 'multinode-868030'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-868030-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-868030-m03 --driver=docker  --container-runtime=crio: (32.394960994s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-868030
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-868030: exit status 80 (326.732199ms)

-- stdout --
	* Adding node m03 to cluster multinode-868030 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-868030-m03 already exists in multinode-868030-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-868030-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-868030-m03: (2.019793626s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.92s)

TestInsufficientStorage (14.12s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-530562 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-530562 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.60565766s)

-- stdout --
	{"specversion":"1.0","id":"f2f5fef0-d14b-4379-b3ac-871f4229ccea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-530562] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3eb7681e-dc15-43df-8666-af0c070ded06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20319"}}
	{"specversion":"1.0","id":"b2b1a287-4879-400f-a9a7-651230908934","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03691fc5-7691-46b5-96de-a4f110db4322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig"}}
	{"specversion":"1.0","id":"0bfc493e-2162-4a68-b44e-f6ab0d748f7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube"}}
	{"specversion":"1.0","id":"85f181c5-b103-4e29-b908-706d12e4d534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7721d0e0-d8c8-47ec-bd36-bac38b8e120d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1c54f2ba-f240-42cc-a755-97bf6c892313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b996adbe-c1f8-418e-ae6b-bdbd2e012731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7cf96ba7-949e-4791-82a9-4a352de5d869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"11c6814b-bd40-432e-ae4a-53da5be0dfe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"07921a44-f148-4242-b9ee-8994d0cbde9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-530562\" primary control-plane node in \"insufficient-storage-530562\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f00c99c7-d7b8-4a89-a716-5a154a43be2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ff915e8-691f-4f32-8585-5fd76c1baea9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4a6c836-fbcb-4620-aa32-a2a020eaa9ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-530562 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-530562 --output=json --layout=cluster: exit status 7 (286.583916ms)

-- stdout --
	{"Name":"insufficient-storage-530562","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-530562","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 12:35:08.093663  443636 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-530562" does not appear in /home/jenkins/minikube-integration/20319-300538/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-530562 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-530562 --output=json --layout=cluster: exit status 7 (286.790685ms)

-- stdout --
	{"Name":"insufficient-storage-530562","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-530562","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 12:35:08.378606  443699 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-530562" does not appear in /home/jenkins/minikube-integration/20319-300538/kubeconfig
	E0127 12:35:08.390997  443699 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/insufficient-storage-530562/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-530562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-530562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-530562: (1.937589725s)
--- PASS: TestInsufficientStorage (14.12s)

TestRunningBinaryUpgrade (84.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.431156220 start -p running-upgrade-400758 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0127 12:39:52.259933  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.431156220 start -p running-upgrade-400758 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.748163866s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-400758 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-400758 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.639481128s)
helpers_test.go:175: Cleaning up "running-upgrade-400758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-400758
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-400758: (2.812742274s)
--- PASS: TestRunningBinaryUpgrade (84.91s)

TestKubernetesUpgrade (382.57s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.997783157s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-722233
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-722233: (1.849930247s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-722233 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-722233 status --format={{.Host}}: exit status 7 (101.105861ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.410490911s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-722233 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (95.850699ms)

-- stdout --
	* [kubernetes-upgrade-722233] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-722233
	    minikube start -p kubernetes-upgrade-722233 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7222332 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-722233 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-722233 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.376043919s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-722233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-722233
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-722233: (2.622585261s)
--- PASS: TestKubernetesUpgrade (382.57s)

TestMissingContainerUpgrade (167.89s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3181192323 start -p missing-upgrade-044441 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3181192323 start -p missing-upgrade-044441 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.129825484s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-044441
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-044441: (10.392674328s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-044441
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-044441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-044441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.753914681s)
helpers_test.go:175: Cleaning up "missing-upgrade-044441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-044441
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-044441: (2.901649662s)
--- PASS: TestMissingContainerUpgrade (167.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-730051 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-730051 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (98.834678ms)

-- stdout --
	* [NoKubernetes-730051] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (39.73s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-730051 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-730051 --driver=docker  --container-runtime=crio: (39.371281485s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-730051 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.73s)

TestNoKubernetes/serial/StartWithStopK8s (7.06s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-730051 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-730051 --no-kubernetes --driver=docker  --container-runtime=crio: (4.616064616s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-730051 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-730051 status -o json: exit status 2 (351.062023ms)

-- stdout --
	{"Name":"NoKubernetes-730051","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-730051
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-730051: (2.094263892s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.06s)

TestNoKubernetes/serial/Start (9.34s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-730051 --no-kubernetes --driver=docker  --container-runtime=crio
E0127 12:35:58.652672  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-730051 --no-kubernetes --driver=docker  --container-runtime=crio: (9.337166861s)
--- PASS: TestNoKubernetes/serial/Start (9.34s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-730051 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-730051 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.34863ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

TestNoKubernetes/serial/ProfileList (1.21s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-730051
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-730051: (1.24082969s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.76s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-730051 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-730051 --driver=docker  --container-runtime=crio: (7.763654512s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.76s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-730051 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-730051 "sudo systemctl is-active --quiet service kubelet": exit status 1 (330.291217ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (0.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

TestStoppedBinaryUpgrade/Upgrade (83.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1703271961 start -p stopped-upgrade-689593 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0127 12:38:29.176834  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1703271961 start -p stopped-upgrade-689593 --memory=2200 --vm-driver=docker  --container-runtime=crio: (31.507125879s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1703271961 -p stopped-upgrade-689593 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1703271961 -p stopped-upgrade-689593 stop: (2.529098541s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-689593 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-689593 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.107196019s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.14s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-689593
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-689593: (1.25089817s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestPause/serial/Start (49.57s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-146673 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0127 12:40:58.652320  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-146673 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.56979445s)
--- PASS: TestPause/serial/Start (49.57s)

TestPause/serial/SecondStartNoReconfiguration (58.24s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-146673 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-146673 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.211442246s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (58.24s)

TestPause/serial/Pause (0.96s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-146673 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.96s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-146673 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-146673 --output=json --layout=cluster: exit status 2 (439.261375ms)

-- stdout --
	{"Name":"pause-146673","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-146673","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

TestPause/serial/Unpause (1.02s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-146673 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-146673 --alsologtostderr -v=5: (1.022735277s)
--- PASS: TestPause/serial/Unpause (1.02s)

TestPause/serial/PauseAgain (1.22s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-146673 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-146673 --alsologtostderr -v=5: (1.218937785s)
--- PASS: TestPause/serial/PauseAgain (1.22s)

TestPause/serial/DeletePaused (3.76s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-146673 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-146673 --alsologtostderr -v=5: (3.762252316s)
--- PASS: TestPause/serial/DeletePaused (3.76s)

TestPause/serial/VerifyDeletedResources (0.23s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-146673
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-146673: exit status 1 (17.561902ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-146673: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.23s)

TestNetworkPlugins/group/false (4.98s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-152612 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-152612 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (237.636806ms)

-- stdout --
	* [false-152612] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20319
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0127 12:42:51.413248  482927 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:42:51.413485  482927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:42:51.413515  482927 out.go:358] Setting ErrFile to fd 2...
	I0127 12:42:51.413535  482927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:42:51.413822  482927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20319-300538/.minikube/bin
	I0127 12:42:51.414340  482927 out.go:352] Setting JSON to false
	I0127 12:42:51.415380  482927 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12319,"bootTime":1737969453,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0127 12:42:51.415490  482927 start.go:139] virtualization:  
	I0127 12:42:51.421185  482927 out.go:177] * [false-152612] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 12:42:51.424346  482927 out.go:177]   - MINIKUBE_LOCATION=20319
	I0127 12:42:51.424410  482927 notify.go:220] Checking for updates...
	I0127 12:42:51.430817  482927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:42:51.433733  482927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20319-300538/kubeconfig
	I0127 12:42:51.436638  482927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20319-300538/.minikube
	I0127 12:42:51.439470  482927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 12:42:51.442478  482927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:42:51.446021  482927 config.go:182] Loaded profile config "force-systemd-flag-923200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:42:51.446230  482927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:42:51.483492  482927 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:42:51.483616  482927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:42:51.560919  482927 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 12:42:51.551654473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:42:51.561033  482927 docker.go:318] overlay module found
	I0127 12:42:51.564190  482927 out.go:177] * Using the docker driver based on user configuration
	I0127 12:42:51.567186  482927 start.go:297] selected driver: docker
	I0127 12:42:51.567207  482927 start.go:901] validating driver "docker" against <nil>
	I0127 12:42:51.567238  482927 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:42:51.570686  482927 out.go:201] 
	W0127 12:42:51.573439  482927 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 12:42:51.576285  482927 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-152612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-152612

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-152612

>>> host: /etc/nsswitch.conf:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /etc/hosts:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /etc/resolv.conf:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-152612

>>> host: crictl pods:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: crictl containers:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> k8s: describe netcat deployment:
error: context "false-152612" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-152612" does not exist

>>> k8s: netcat logs:
error: context "false-152612" does not exist

>>> k8s: describe coredns deployment:
error: context "false-152612" does not exist

>>> k8s: describe coredns pods:
error: context "false-152612" does not exist

>>> k8s: coredns logs:
error: context "false-152612" does not exist

>>> k8s: describe api server pod(s):
error: context "false-152612" does not exist

>>> k8s: api server logs:
error: context "false-152612" does not exist

>>> host: /etc/cni:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: ip a s:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: ip r s:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: iptables-save:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: iptables table nat:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> k8s: describe kube-proxy daemon set:
error: context "false-152612" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-152612" does not exist

>>> k8s: kube-proxy logs:
error: context "false-152612" does not exist

>>> host: kubelet daemon status:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: kubelet daemon config:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> k8s: kubelet logs:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-152612

>>> host: docker daemon status:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: docker daemon config:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /etc/docker/daemon.json:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: docker system info:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: cri-docker daemon status:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: cri-docker daemon config:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: cri-dockerd version:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: containerd daemon status:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: containerd daemon config:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /etc/containerd/config.toml:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: containerd config dump:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: crio daemon status:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: crio daemon config:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: /etc/crio:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"

>>> host: crio config:
* Profile "false-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-152612"
----------------------- debugLogs end: false-152612 [took: 4.526224419s] --------------------------------
helpers_test.go:175: Cleaning up "false-152612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-152612
--- PASS: TestNetworkPlugins/group/false (4.98s)

TestStartStop/group/old-k8s-version/serial/FirstStart (128.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-981325 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0127 12:45:58.652701  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-981325 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m8.743373728s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.74s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-981325 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eae62a55-33b9-4169-8df8-9069f6a86621] Pending
helpers_test.go:344: "busybox" [eae62a55-33b9-4169-8df8-9069f6a86621] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eae62a55-33b9-4169-8df8-9069f6a86621] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003565177s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-981325 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-981325 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-981325 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-981325 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-981325 --alsologtostderr -v=3: (11.947062148s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-981325 -n old-k8s-version-981325
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-981325 -n old-k8s-version-981325: exit status 7 (78.141434ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-981325 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-981325 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-981325 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m20.127385213s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-981325 -n old-k8s-version-981325
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.58s)

TestStartStop/group/no-preload/serial/FirstStart (68.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-776007 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 12:48:29.176297  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-776007 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m8.291704666s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.29s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-776007 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f7640fd8-925d-4353-852b-9a710161a608] Pending
helpers_test.go:344: "busybox" [f7640fd8-925d-4353-852b-9a710161a608] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f7640fd8-925d-4353-852b-9a710161a608] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.007440206s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-776007 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-776007 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-776007 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.087483782s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-776007 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (12.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-776007 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-776007 --alsologtostderr -v=3: (12.078625948s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-776007 -n no-preload-776007
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-776007 -n no-preload-776007: exit status 7 (80.987977ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-776007 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
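The `status error: exit status 7 (may be ok)` lines above show the harness tolerating certain non-zero exit codes from `minikube status` when the cluster is intentionally stopped. A minimal Python sketch of that tolerance check; the accepted-code set below is inferred only from the exit codes seen in this run (7 for a stopped host, 2 for paused/stopped components), not from minikube's documented exit-code table:

```python
def status_may_be_ok(exit_code: int) -> bool:
    """Mirror the harness's 'status error: exit status N (may be ok)' logic.

    0 means everything is running; 2 and 7 appear in this run for
    paused/stopped clusters and are tolerated rather than treated as
    test failures. Any other code would be a real error.
    """
    return exit_code in (0, 2, 7)


# Example: a stopped host ('exit status 7' above) is acceptable,
# while a generic failure (exit status 1) is not.
acceptable = status_may_be_ok(7)
failure = status_may_be_ok(1)
```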

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (277.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-776007 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 12:49:01.726676  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-776007 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m36.617706111s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-776007 -n no-preload-776007
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (277.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dwtv8" [0aec9fdc-4dfe-48e1-91bd-0845b60af40b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004014862s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
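The "waiting 9m0s for pods matching …" lines above come from a poll-until-healthy helper (helpers_test.go) that repeatedly lists pods for a label selector until one is Running or the deadline passes. A rough Python sketch of that polling pattern, assuming a caller-supplied `list_pods` stand-in for the real Kubernetes API call:

```python
import time


def wait_for_running(list_pods, selector: str, timeout_s: float, poll_s: float = 0.01) -> bool:
    """Poll `list_pods(selector)` until some pod reports phase 'Running'.

    `list_pods` is a hypothetical injected function returning a list of
    (name, phase) tuples; the real harness queries the cluster instead.
    Returns True on success, False if the timeout elapses first.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        pods = list_pods(selector)
        if any(phase == "Running" for _, phase in pods):
            return True
        time.sleep(poll_s)
    return False
```

This mirrors the Pending → Running transitions logged above, where a pod is reported several times before the selector is declared healthy.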

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dwtv8" [0aec9fdc-4dfe-48e1-91bd-0845b60af40b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00429259s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-981325 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-981325 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
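VerifyKubernetesImages runs `image list --format=json` and reports anything outside the expected Kubernetes registries as a "non-minikube image", as with the kindest/kindnetd and busybox entries above. A hedged sketch of that filtering step, assuming a flat JSON array of image names and an illustrative registry.k8s.io-only allowlist (the real expected-image set and the actual JSON shape are defined by the test code and minikube, not here):

```python
import json

# Illustrative allowlist: for this sketch, only registry.k8s.io images
# count as "minikube" images. This is an assumption for demonstration.
MINIKUBE_REGISTRY_PREFIXES = ("registry.k8s.io/",)


def non_minikube_images(image_list_json: str):
    """Return image names not served from an allowlisted registry."""
    names = json.loads(image_list_json)
    return [n for n in names if not n.startswith(MINIKUBE_REGISTRY_PREFIXES)]
```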

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-981325 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-981325 -n old-k8s-version-981325
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-981325 -n old-k8s-version-981325: exit status 2 (346.847141ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-981325 -n old-k8s-version-981325
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-981325 -n old-k8s-version-981325: exit status 2 (318.931784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-981325 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-981325 -n old-k8s-version-981325
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-981325 -n old-k8s-version-981325
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (80s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-435843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-435843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (1m20.002003991s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-435843 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6ab4868-e976-4c65-8366-b32e60841eb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0127 12:50:58.652323  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f6ab4868-e976-4c65-8366-b32e60841eb1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004447174s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-435843 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-435843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-435843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075310067s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-435843 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-435843 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-435843 --alsologtostderr -v=3: (11.957012762s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-435843 -n embed-certs-435843
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-435843 -n embed-certs-435843: exit status 7 (71.442346ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-435843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-435843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 12:51:34.618717  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:34.625042  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:34.636372  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:34.657722  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:34.699184  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:34.780536  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:34.942759  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:35.264210  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:35.906144  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:37.188200  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:39.749760  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:44.871719  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:51:55.113867  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:52:15.596041  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:52:56.558545  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:53:29.177307  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-435843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m26.389965269s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-435843 -n embed-certs-435843
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xtn95" [ea21cdb0-de54-4876-bdd9-3446693684d8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003673627s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xtn95" [ea21cdb0-de54-4876-bdd9-3446693684d8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003647469s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-776007 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-776007 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-776007 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-776007 -n no-preload-776007
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-776007 -n no-preload-776007: exit status 2 (332.261176ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-776007 -n no-preload-776007
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-776007 -n no-preload-776007: exit status 2 (329.74572ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-776007 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-776007 -n no-preload-776007
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-776007 -n no-preload-776007
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-147564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 12:54:18.480527  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-147564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (48.929033807s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-147564 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [32d7954b-0420-4c13-b297-eedbba2c79d5] Pending
helpers_test.go:344: "busybox" [32d7954b-0420-4c13-b297-eedbba2c79d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [32d7954b-0420-4c13-b297-eedbba2c79d5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00405962s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-147564 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-147564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-147564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028327714s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-147564 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-147564 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-147564 --alsologtostderr -v=3: (12.002488664s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564: exit status 7 (84.919693ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-147564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-147564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-147564 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (5m2.905871656s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8wd4q" [3a949734-0ff3-4058-9bc1-ba110a38b83b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004718855s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8wd4q" [3a949734-0ff3-4058-9bc1-ba110a38b83b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004054442s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-435843 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-435843 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-435843 --alsologtostderr -v=1
E0127 12:55:58.652516  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-435843 -n embed-certs-435843
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-435843 -n embed-certs-435843: exit status 2 (319.460301ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-435843 -n embed-certs-435843
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-435843 -n embed-certs-435843: exit status 2 (332.346164ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-435843 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-435843 --alsologtostderr -v=1: (1.196798412s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-435843 -n embed-certs-435843
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-435843 -n embed-certs-435843
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-535597 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 12:56:32.261865  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:56:34.619014  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-535597 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (36.612667497s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-535597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-535597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.140720803s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-535597 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-535597 --alsologtostderr -v=3: (1.225129407s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-535597 -n newest-cni-535597
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-535597 -n newest-cni-535597: exit status 7 (72.002111ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-535597 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.59s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-535597 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-535597 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (15.163203562s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-535597 -n newest-cni-535597
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.59s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-535597 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-535597 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-535597 --alsologtostderr -v=1: (1.24107463s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-535597 -n newest-cni-535597
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-535597 -n newest-cni-535597: exit status 2 (356.878733ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-535597 -n newest-cni-535597
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-535597 -n newest-cni-535597: exit status 2 (363.895484ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-535597 --alsologtostderr -v=1
E0127 12:57:02.321780  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-535597 -n newest-cni-535597
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-535597 -n newest-cni-535597
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.51s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (77.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.288105938s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.29s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-152612 "pgrep -a kubelet"
I0127 12:58:23.239860  305936 config.go:182] Loaded profile config "auto-152612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-152612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-p7pwv" [d9bd0380-2f38-44fe-af5e-52ee33afc0a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-p7pwv" [d9bd0380-2f38-44fe-af5e-52ee33afc0a4] Running
E0127 12:58:29.176584  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.062384  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.068948  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.080387  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.101844  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.143256  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.224724  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.386307  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:32.707598  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:58:33.349745  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003685282s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-152612 exec deployment/netcat -- nslookup kubernetes.default
E0127 12:58:34.631193  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (77.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0127 12:59:13.039210  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:59:54.001071  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m17.944124292s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mpnq8" [986d18d2-847a-42d1-846a-befa914cef38] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00418976s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mpnq8" [986d18d2-847a-42d1-846a-befa914cef38] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004105052s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-147564 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2qb4s" [eedee65e-b20b-40f1-a6ef-2927af435295] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005103098s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-147564 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-147564 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564: exit status 2 (330.255725ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564: exit status 2 (324.540436ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-147564 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-147564 -n default-k8s-diff-port-147564
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)
E0127 13:04:38.735287  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:38.741614  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:38.752988  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:38.774356  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:38.815740  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:38.897127  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:39.058675  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:39.381636  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:40.024023  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:41.316902  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:43.879017  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:45.450136  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:49.000253  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:04:59.242439  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:13.618972  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:13.625409  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:13.636885  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:13.658436  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:13.699924  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:13.781458  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:13.942950  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:14.264715  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:14.906100  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:16.187841  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:18.750029  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:19.724196  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/default-k8s-diff-port-147564/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:23.872183  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:05:34.113551  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/kindnet-152612/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-152612 "pgrep -a kubelet"
I0127 13:00:20.122586  305936 config.go:182] Loaded profile config "kindnet-152612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-152612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lrb7f" [5f2d654d-891c-4b6f-b359-16dca4a5bad4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lrb7f" [5f2d654d-891c-4b6f-b359-16dca4a5bad4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005002714s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)

TestNetworkPlugins/group/calico/Start (75.44s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.44170968s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.44s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-152612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (58.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0127 13:00:58.652475  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:01:15.923516  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:01:34.619107  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/old-k8s-version-981325/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (58.579887611s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.58s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-82gtr" [c1c9e8ac-7767-473c-af63-0b0e23e420fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006041526s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-152612 "pgrep -a kubelet"
I0127 13:01:45.755857  305936 config.go:182] Loaded profile config "calico-152612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-152612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gt2vh" [d73ecbbe-6dcf-4c4f-a946-5e4bc6acfaf5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gt2vh" [d73ecbbe-6dcf-4c4f-a946-5e4bc6acfaf5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006402629s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.33s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-152612 "pgrep -a kubelet"
I0127 13:01:56.990295  305936 config.go:182] Loaded profile config "custom-flannel-152612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-152612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zsqjf" [8c90539d-d2d8-457c-bdae-1081f7420f38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zsqjf" [8c90539d-d2d8-457c-bdae-1081f7420f38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004316283s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

TestNetworkPlugins/group/calico/DNS (0.44s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-152612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.44s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-152612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

TestNetworkPlugins/group/enable-default-cni/Start (83.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.044725582s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.04s)

TestNetworkPlugins/group/flannel/Start (61.2s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0127 13:03:23.507086  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:23.513500  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:23.524938  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:23.546335  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:23.587778  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:23.669453  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:23.831006  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:24.152755  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:24.794239  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:26.076384  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:28.638170  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:29.177287  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/functional-979480/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:32.061978  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/no-preload-776007/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:03:33.759778  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.203635548s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ht272" [28c25149-6474-47ef-b75a-3afccd91f50a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.01427244s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-152612 "pgrep -a kubelet"
I0127 13:03:43.590021  305936 config.go:182] Loaded profile config "flannel-152612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-152612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xhfr7" [bda26ffc-a0d9-4a08-a65e-2f19d1a2475b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 13:03:44.001519  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/auto-152612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-xhfr7" [bda26ffc-a0d9-4a08-a65e-2f19d1a2475b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003735334s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-152612 "pgrep -a kubelet"
I0127 13:03:44.690410  305936 config.go:182] Loaded profile config "enable-default-cni-152612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-152612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8r6r2" [6ec1304a-236b-4554-b8ad-7cfd030295d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8r6r2" [6ec1304a-236b-4554-b8ad-7cfd030295d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005467381s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-152612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-152612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (77.46s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-152612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.457414718s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.46s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-152612 "pgrep -a kubelet"
I0127 13:05:38.869454  305936 config.go:182] Loaded profile config "bridge-152612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-152612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hz48s" [f7e86bc4-af6b-4897-a3fd-74c21f8d65c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 13:05:41.728231  305936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20319-300538/.minikube/profiles/addons-334107/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-hz48s" [f7e86bc4-af6b-4897-a3fd-74c21f8d65c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003449037s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-152612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-152612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.68s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-159827 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-159827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-159827
--- SKIP: TestDownloadOnlyKic (0.68s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-334107 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-081792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-081792
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.48s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-152612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-152612

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-152612

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /etc/hosts:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /etc/resolv.conf:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-152612

>>> host: crictl pods:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: crictl containers:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> k8s: describe netcat deployment:
error: context "kubenet-152612" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-152612" does not exist

>>> k8s: netcat logs:
error: context "kubenet-152612" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-152612" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-152612" does not exist

>>> k8s: coredns logs:
error: context "kubenet-152612" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-152612" does not exist

>>> k8s: api server logs:
error: context "kubenet-152612" does not exist

>>> host: /etc/cni:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: ip a s:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: ip r s:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: iptables-save:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: iptables table nat:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-152612" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-152612" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-152612" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: kubelet daemon config:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> k8s: kubelet logs:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-152612

>>> host: docker daemon status:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: docker daemon config:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: docker system info:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: cri-docker daemon status:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: cri-docker daemon config:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: cri-dockerd version:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: containerd daemon status:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: containerd daemon config:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: containerd config dump:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: crio daemon status:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: crio daemon config:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: /etc/crio:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

>>> host: crio config:
* Profile "kubenet-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-152612"

----------------------- debugLogs end: kubenet-152612 [took: 5.275206621s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-152612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-152612
--- SKIP: TestNetworkPlugins/group/kubenet (5.48s)

TestNetworkPlugins/group/cilium (5.62s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-152612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-152612

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-152612

>>> host: /etc/nsswitch.conf:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /etc/hosts:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /etc/resolv.conf:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-152612

>>> host: crictl pods:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: crictl containers:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> k8s: describe netcat deployment:
error: context "cilium-152612" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-152612" does not exist

>>> k8s: netcat logs:
error: context "cilium-152612" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-152612" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-152612" does not exist

>>> k8s: coredns logs:
error: context "cilium-152612" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-152612" does not exist

>>> k8s: api server logs:
error: context "cilium-152612" does not exist

>>> host: /etc/cni:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: ip a s:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: ip r s:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: iptables-save:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: iptables table nat:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-152612

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-152612

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-152612" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-152612" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-152612

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-152612

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-152612" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-152612" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-152612" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-152612" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-152612" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: kubelet daemon config:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> k8s: kubelet logs:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-152612

>>> host: docker daemon status:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: docker daemon config:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: docker system info:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: cri-docker daemon status:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: cri-docker daemon config:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: cri-dockerd version:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: containerd daemon status:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: containerd daemon config:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: containerd config dump:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: crio daemon status:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: crio daemon config:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: /etc/crio:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

>>> host: crio config:
* Profile "cilium-152612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152612"

----------------------- debugLogs end: cilium-152612 [took: 5.419640507s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-152612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-152612
--- SKIP: TestNetworkPlugins/group/cilium (5.62s)