Test Report: Docker_Linux 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36297

Failed tests (2/342)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 72.5         |
| 259   | TestKubernetesUpgrade        | 342.86       |
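To iterate on a failure like this locally, the suite can be narrowed to a single subtest with Go's standard -run filter. A minimal sketch, assuming a minikube source checkout and the standard Makefile cross-build target; any harness-specific flags (driver selection, start args) are omitted here and may be required on your branch:

    # Build the CLI binary the integration tests drive (assumed Makefile target):
    make out/minikube-linux-amd64

    # Re-run only the failing subtest; -run accepts slash-separated subtest paths.
    go test ./test/integration -run "TestAddons/parallel/Registry" -v -timeout 30m
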
TestAddons/parallel/Registry (72.5s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.789209ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2sstq" [67cce838-d446-44f8-90cb-4b7c286fcfcb] Running
I0920 16:56:26.914684   15398 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:56:26.914709   15398 kapi.go:107] duration metric: took 4.06511ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003069847s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r58ln" [243fbbcd-f60b-492a-ab03-a7425f4bce3b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002754231s
addons_test.go:338: (dbg) Run:  kubectl --context addons-205029 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-205029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-205029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.07354132s)

-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr **
	error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-205029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 ip
2024/09/20 16:57:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable registry --alsologtostderr -v=1
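The failing probe can be replayed by hand while a profile is still up. A minimal sketch built only from commands already shown in this log; the node IP (192.168.49.2) and registry port (5000) come from the DEBUG GET above, and /v2/ is the standard Docker registry HTTP API root:

    # In-cluster probe, exactly as the test ran it (addons_test.go:343 above):
    kubectl --context addons-205029 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # Host-side probe of the endpoint the harness hit at 16:57:37;
    # a healthy registry returns HTTP 200 on /v2/:
    curl -sv "http://$(out/minikube-linux-amd64 -p addons-205029 ip):5000/v2/"
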
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-205029
helpers_test.go:235: (dbg) docker inspect addons-205029:

-- stdout --
	[
	    {
	        "Id": "6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca",
	        "Created": "2024-09-20T16:44:35.267135562Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 17526,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T16:44:35.406417505Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/hosts",
	        "LogPath": "/var/lib/docker/containers/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca/6ba2b186673de26e77367528e0d08b76dabe76aefa0130fb2dc6c28d726f8bca-json.log",
	        "Name": "/addons-205029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-205029:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-205029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4-init/diff:/var/lib/docker/overlay2/04d8ee2bca91b716c0fbed8d5cf8682c2b84f5613656c8faad7ce3474f9e857f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c47ec56723f5a67386e3339dd1fb2d3b54fba3ff16ddd3487543821e6d4873d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-205029",
	                "Source": "/var/lib/docker/volumes/addons-205029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-205029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-205029",
	                "name.minikube.sigs.k8s.io": "addons-205029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "baa298c4d59335be6917fea60d58f068d7ff318b3df17c4ffd8dbc5b5bfcf92e",
	            "SandboxKey": "/var/run/docker/netns/baa298c4d593",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-205029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9bda61730e6b3c6514aae8f9b88bc36015ae46024cb4ddff1d942a33513e91cf",
	                    "EndpointID": "9fde9921025335b37e01768dd34b10b097dbc89411267e8b19d37f84bd600ccb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-205029",
	                        "6ba2b186673d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
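The NetworkSettings.Ports map in the inspect output above is what the SSH helpers resolve later in this log. A sketch of the same query, reusing the --format expression that appears verbatim in the provisioning log further below:

    # Host port mapped to the container's SSH endpoint (32768 in the output above):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-205029

    # Likewise for the registry's 5000/tcp mapping (32770 above):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-205029
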
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-205029 -n addons-205029
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-226389 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | download-docker-226389                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-226389                                                                   | download-docker-226389 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-950195   | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-950195                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35633                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-950195                                                                     | binary-mirror-950195   | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-205029                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	|         | addons-205029                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-205029 --wait=true                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-205029 addons disable                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:48 UTC | 20 Sep 24 16:48 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-205029 addons disable                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-205029 addons                                                                        | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | -p addons-205029                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | addons-205029                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | -p addons-205029                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-205029 ssh cat                                                                       | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | /opt/local-path-provisioner/pvc-d6bd4afe-8bba-4f86-86d7-a230517a8194_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-205029 addons disable                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-205029 addons disable                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | addons-205029                                                                               |                        |         |         |                     |                     |
	| addons  | addons-205029 addons                                                                        | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-205029 addons                                                                        | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-205029 ssh curl -s                                                                   | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-205029 ip                                                                            | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	| addons  | addons-205029 addons disable                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-205029 addons disable                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-205029 ip                                                                            | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	| addons  | addons-205029 addons disable                                                                | addons-205029          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:44:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:44:13.479072   16774 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:44:13.479186   16774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:13.479194   16774 out.go:358] Setting ErrFile to fd 2...
	I0920 16:44:13.479199   16774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:13.479394   16774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 16:44:13.480001   16774 out.go:352] Setting JSON to false
	I0920 16:44:13.480865   16774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1597,"bootTime":1726849056,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:44:13.480970   16774 start.go:139] virtualization: kvm guest
	I0920 16:44:13.483255   16774 out.go:177] * [addons-205029] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 16:44:13.484878   16774 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:44:13.484899   16774 notify.go:220] Checking for updates...
	I0920 16:44:13.487980   16774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:44:13.489505   16774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 16:44:13.490982   16774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	I0920 16:44:13.492311   16774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 16:44:13.493655   16774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:44:13.495342   16774 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:44:13.519824   16774 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 16:44:13.519933   16774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:44:13.565776   16774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 16:44:13.55641081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 16:44:13.565884   16774 docker.go:318] overlay module found
	I0920 16:44:13.567781   16774 out.go:177] * Using the docker driver based on user configuration
	I0920 16:44:13.569278   16774 start.go:297] selected driver: docker
	I0920 16:44:13.569297   16774 start.go:901] validating driver "docker" against <nil>
	I0920 16:44:13.569312   16774 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:44:13.570093   16774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:44:13.616950   16774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 16:44:13.608060045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 16:44:13.617152   16774 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:44:13.617418   16774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:44:13.619145   16774 out.go:177] * Using Docker driver with root privileges
	I0920 16:44:13.620576   16774 cni.go:84] Creating CNI manager for ""
	I0920 16:44:13.620667   16774 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:13.620683   16774 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:44:13.620762   16774 start.go:340] cluster config:
	{Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:13.622133   16774 out.go:177] * Starting "addons-205029" primary control-plane node in "addons-205029" cluster
	I0920 16:44:13.623665   16774 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 16:44:13.625122   16774 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 16:44:13.626588   16774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:44:13.626636   16774 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 16:44:13.626651   16774 cache.go:56] Caching tarball of preloaded images
	I0920 16:44:13.626702   16774 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 16:44:13.626729   16774 preload.go:172] Found /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 16:44:13.626737   16774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 16:44:13.627073   16774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/config.json ...
	I0920 16:44:13.627099   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/config.json: {Name:mk3df41d227938ff6bc2c2917ae2860a5ae8fb8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:13.642943   16774 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 16:44:13.643086   16774 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 16:44:13.643108   16774 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 16:44:13.643112   16774 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 16:44:13.643120   16774 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 16:44:13.643125   16774 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 16:44:25.835992   16774 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 16:44:25.836030   16774 cache.go:194] Successfully downloaded all kic artifacts
	I0920 16:44:25.836077   16774 start.go:360] acquireMachinesLock for addons-205029: {Name:mk9021422c05f4629eb9257457a8fcc06e3f877b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:25.836172   16774 start.go:364] duration metric: took 76.433µs to acquireMachinesLock for "addons-205029"
	I0920 16:44:25.836194   16774 start.go:93] Provisioning new machine with config: &{Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 16:44:25.836266   16774 start.go:125] createHost starting for "" (driver="docker")
	I0920 16:44:25.838901   16774 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 16:44:25.839156   16774 start.go:159] libmachine.API.Create for "addons-205029" (driver="docker")
	I0920 16:44:25.839191   16774 client.go:168] LocalClient.Create starting
	I0920 16:44:25.839303   16774 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem
	I0920 16:44:26.077196   16774 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem
	I0920 16:44:26.280201   16774 cli_runner.go:164] Run: docker network inspect addons-205029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 16:44:26.296211   16774 cli_runner.go:211] docker network inspect addons-205029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 16:44:26.296295   16774 network_create.go:284] running [docker network inspect addons-205029] to gather additional debugging logs...
	I0920 16:44:26.296319   16774 cli_runner.go:164] Run: docker network inspect addons-205029
	W0920 16:44:26.311340   16774 cli_runner.go:211] docker network inspect addons-205029 returned with exit code 1
	I0920 16:44:26.311371   16774 network_create.go:287] error running [docker network inspect addons-205029]: docker network inspect addons-205029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-205029 not found
	I0920 16:44:26.311382   16774 network_create.go:289] output of [docker network inspect addons-205029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-205029 not found
	
	** /stderr **
	I0920 16:44:26.311469   16774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 16:44:26.327244   16774 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b1a780}
	I0920 16:44:26.327288   16774 network_create.go:124] attempt to create docker network addons-205029 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 16:44:26.327329   16774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-205029 addons-205029
	I0920 16:44:26.388062   16774 network_create.go:108] docker network addons-205029 192.168.49.0/24 created
	I0920 16:44:26.388087   16774 kic.go:121] calculated static IP "192.168.49.2" for the "addons-205029" container
	I0920 16:44:26.388154   16774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 16:44:26.404467   16774 cli_runner.go:164] Run: docker volume create addons-205029 --label name.minikube.sigs.k8s.io=addons-205029 --label created_by.minikube.sigs.k8s.io=true
	I0920 16:44:26.421456   16774 oci.go:103] Successfully created a docker volume addons-205029
	I0920 16:44:26.421532   16774 cli_runner.go:164] Run: docker run --rm --name addons-205029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205029 --entrypoint /usr/bin/test -v addons-205029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0920 16:44:31.241695   16774 cli_runner.go:217] Completed: docker run --rm --name addons-205029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205029 --entrypoint /usr/bin/test -v addons-205029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (4.820124605s)
	I0920 16:44:31.241722   16774 oci.go:107] Successfully prepared a docker volume addons-205029
	I0920 16:44:31.241737   16774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:44:31.241757   16774 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 16:44:31.241819   16774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-205029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 16:44:35.206249   16774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-205029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.964388864s)
	I0920 16:44:35.206282   16774 kic.go:203] duration metric: took 3.964520827s to extract preloaded images to volume ...
	W0920 16:44:35.206418   16774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 16:44:35.206533   16774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 16:44:35.252143   16774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-205029 --name addons-205029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-205029 --network addons-205029 --ip 192.168.49.2 --volume addons-205029:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 16:44:35.582140   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Running}}
	I0920 16:44:35.599543   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:35.617933   16774 cli_runner.go:164] Run: docker exec addons-205029 stat /var/lib/dpkg/alternatives/iptables
	I0920 16:44:35.661912   16774 oci.go:144] the created container "addons-205029" has a running status.
	I0920 16:44:35.661938   16774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa...
	I0920 16:44:35.889519   16774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 16:44:35.913888   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:35.929899   16774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 16:44:35.929918   16774 kic_runner.go:114] Args: [docker exec --privileged addons-205029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 16:44:35.978911   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:35.995802   16774 machine.go:93] provisionDockerMachine start ...
	I0920 16:44:35.995886   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:36.013440   16774 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:36.013632   16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:36.013644   16774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 16:44:36.206524   16774 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-205029
	
	I0920 16:44:36.206551   16774 ubuntu.go:169] provisioning hostname "addons-205029"
	I0920 16:44:36.206605   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:36.223600   16774 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:36.223787   16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:36.223809   16774 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-205029 && echo "addons-205029" | sudo tee /etc/hostname
	I0920 16:44:36.369133   16774 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-205029
	
	I0920 16:44:36.369207   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:36.385758   16774 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:36.385954   16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:36.385973   16774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-205029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-205029/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-205029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 16:44:36.514906   16774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
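Every provisioning step from here on runs over SSH against the container's forwarded port (127.0.0.1:32768 in this log). A minimal sketch of such a client with golang.org/x/crypto/ssh, assuming key-based auth with the id_rsa generated above (runSSH is a hypothetical helper, not libmachine's API):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the forwarded SSH port and runs one command, mirroring the
// provisioning calls above. addr and keyPath are placeholders.
func runSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local kic container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:32768", "docker",
		"/home/jenkins/.minikube/machines/addons-205029/id_rsa", "hostname")
	fmt.Println(out, err)
}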
	I0920 16:44:36.514931   16774 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8616/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8616/.minikube}
	I0920 16:44:36.515001   16774 ubuntu.go:177] setting up certificates
	I0920 16:44:36.515013   16774 provision.go:84] configureAuth start
	I0920 16:44:36.515084   16774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205029
	I0920 16:44:36.531544   16774 provision.go:143] copyHostCerts
	I0920 16:44:36.531616   16774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem (1082 bytes)
	I0920 16:44:36.531745   16774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem (1123 bytes)
	I0920 16:44:36.531812   16774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem (1679 bytes)
	I0920 16:44:36.531874   16774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem org=jenkins.addons-205029 san=[127.0.0.1 192.168.49.2 addons-205029 localhost minikube]
	I0920 16:44:36.667019   16774 provision.go:177] copyRemoteCerts
	I0920 16:44:36.667075   16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 16:44:36.667111   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:36.683532   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:36.778950   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 16:44:36.799356   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 16:44:36.819696   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 16:44:36.840152   16774 provision.go:87] duration metric: took 325.125435ms to configureAuth
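The three scp entries above push the CA and server pair into /etc/docker on the machine. One way to express that kind of copy without a remote scp binary is to stream the bytes into sudo tee over an SSH session; a sketch assuming an established *ssh.Client like the one in the previous example (pushFile is hypothetical, and this compiles as a helper inside a package rather than a standalone program):

package provision

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// pushFile streams local bytes to a root-owned remote path by piping them
// into "sudo tee", much like the scp entries above.
func pushFile(client *ssh.Client, data []byte, remotePath string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee writes stdin to the target; discard its echo of the contents.
	return session.Run("sudo tee " + remotePath + " >/dev/null")
}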
	I0920 16:44:36.840173   16774 ubuntu.go:193] setting minikube options for container-runtime
	I0920 16:44:36.840311   16774 config.go:182] Loaded profile config "addons-205029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:44:36.840350   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:36.857247   16774 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:36.857441   16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:36.857456   16774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 16:44:36.983454   16774 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 16:44:36.983474   16774 ubuntu.go:71] root file system type: overlay
	I0920 16:44:36.983595   16774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 16:44:36.983650   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:37.000023   16774 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:37.000216   16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:37.000304   16774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 16:44:37.137225   16774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 16:44:37.137307   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:37.153537   16774 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:37.153718   16774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:37.153735   16774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 16:44:37.856452   16774 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 16:44:37.134445546 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 16:44:37.856489   16774 machine.go:96] duration metric: took 1.860663563s to provisionDockerMachine
	I0920 16:44:37.856501   16774 client.go:171] duration metric: took 12.017302418s to LocalClient.Create
	I0920 16:44:37.856521   16774 start.go:167] duration metric: took 12.01736583s to libmachine.API.Create "addons-205029"
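The diff-or-replace one-liner above is an idempotent unit install: swap in docker.service only when the rendered text differs, then daemon-reload, enable, and restart. A local Go sketch of the same pattern via os/exec (installUnit is illustrative, must run as root, and the paths are placeholders):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// installUnit replaces dst with the new unit only when the content differs,
// then reloads systemd and restarts the service, the same pattern as the
// remote one-liner above.
func installUnit(newUnit []byte, dst, service string) error {
	old, _ := os.ReadFile(dst) // a missing file simply counts as "differs"
	if string(old) == string(newUnit) {
		return nil // unchanged; skip the needless daemon restart
	}
	if err := os.WriteFile(dst, newUnit, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(installUnit([]byte("[Unit]\n"), "/lib/systemd/system/docker.service", "docker"))
}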
	I0920 16:44:37.856531   16774 start.go:293] postStartSetup for "addons-205029" (driver="docker")
	I0920 16:44:37.856546   16774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 16:44:37.856612   16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 16:44:37.856657   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:37.872895   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:37.963792   16774 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 16:44:37.966776   16774 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 16:44:37.966802   16774 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 16:44:37.966811   16774 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 16:44:37.966821   16774 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 16:44:37.966833   16774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/addons for local assets ...
	I0920 16:44:37.966893   16774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/files for local assets ...
	I0920 16:44:37.966917   16774 start.go:296] duration metric: took 110.378581ms for postStartSetup
	I0920 16:44:37.967194   16774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205029
	I0920 16:44:37.983269   16774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/config.json ...
	I0920 16:44:37.983512   16774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 16:44:37.983548   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:38.000043   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:38.087578   16774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 16:44:38.091688   16774 start.go:128] duration metric: took 12.255409328s to createHost
	I0920 16:44:38.091708   16774 start.go:83] releasing machines lock for "addons-205029", held for 12.255526508s
	I0920 16:44:38.091773   16774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205029
	I0920 16:44:38.107666   16774 ssh_runner.go:195] Run: cat /version.json
	I0920 16:44:38.107722   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:38.107737   16774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 16:44:38.107810   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:38.125566   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:38.126841   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:38.285838   16774 ssh_runner.go:195] Run: systemctl --version
	I0920 16:44:38.289936   16774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 16:44:38.293954   16774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 16:44:38.316287   16774 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 16:44:38.316343   16774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 16:44:38.341889   16774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 16:44:38.341916   16774 start.go:495] detecting cgroup driver to use...
	I0920 16:44:38.341946   16774 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 16:44:38.342058   16774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:38.356646   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 16:44:38.365911   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 16:44:38.375287   16774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 16:44:38.375345   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 16:44:38.384926   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 16:44:38.394150   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 16:44:38.403040   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 16:44:38.412257   16774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 16:44:38.421005   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 16:44:38.430219   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 16:44:38.439566   16774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 16:44:38.448832   16774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 16:44:38.456655   16774 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 16:44:38.456712   16774 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 16:44:38.469843   16774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 16:44:38.478161   16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:38.553464   16774 ssh_runner.go:195] Run: sudo systemctl restart containerd
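The run of sed commands above rewrites /etc/containerd/config.toml in place, notably forcing SystemdCgroup = false to match the cgroupfs driver detected on the host. The same edit expressed in Go, as a sketch (setSystemdCgroup is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup flips the SystemdCgroup toggle in containerd's config,
// the Go equivalent of the "sudo sed -i" edit above.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	fmt.Println(setSystemdCgroup("/etc/containerd/config.toml", false))
}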
	I0920 16:44:38.641249   16774 start.go:495] detecting cgroup driver to use...
	I0920 16:44:38.641297   16774 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 16:44:38.641336   16774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 16:44:38.652129   16774 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 16:44:38.652189   16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 16:44:38.663581   16774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:38.679578   16774 ssh_runner.go:195] Run: which cri-dockerd
	I0920 16:44:38.682948   16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 16:44:38.692617   16774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 16:44:38.711087   16774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 16:44:38.790494   16774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 16:44:38.885768   16774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 16:44:38.885894   16774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 16:44:38.902460   16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:38.982503   16774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 16:44:39.237549   16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 16:44:39.248309   16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 16:44:39.259122   16774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 16:44:39.336409   16774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 16:44:39.411133   16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:39.487678   16774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 16:44:39.499466   16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 16:44:39.508899   16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:39.584378   16774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 16:44:39.644513   16774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 16:44:39.644596   16774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 16:44:39.648006   16774 start.go:563] Will wait 60s for crictl version
	I0920 16:44:39.648048   16774 ssh_runner.go:195] Run: which crictl
	I0920 16:44:39.651108   16774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 16:44:39.681795   16774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
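start.go above waits up to 60s for /var/run/cri-dockerd.sock before trusting the runtime; the log does so by stat-ing the path. A variant that actually dials the socket looks like this (waitForSocket is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes, mirroring the 60s wait for /var/run/cri-dockerd.sock.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if conn, err := net.DialTimeout("unix", path, time.Second); err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}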
	I0920 16:44:39.681855   16774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 16:44:39.704047   16774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 16:44:39.730171   16774 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 16:44:39.730250   16774 cli_runner.go:164] Run: docker network inspect addons-205029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 16:44:39.747110   16774 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 16:44:39.750457   16774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
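The bash pipeline above is an idempotent hosts-file update: strip any stale host.minikube.internal line, append a fresh one, and copy the result back into place. An equivalent sketch in Go (ensureHostsEntry is hypothetical; writing /etc/hosts needs privileges):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry keeps exactly one "<ip>\t<name>" line in a hosts file,
// the same grep-and-append dance as the shell pipeline above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"))
}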
	I0920 16:44:39.760226   16774 kubeadm.go:883] updating cluster {Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 16:44:39.760331   16774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:44:39.760376   16774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 16:44:39.778278   16774 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 16:44:39.778298   16774 docker.go:615] Images already preloaded, skipping extraction
	I0920 16:44:39.778356   16774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 16:44:39.796624   16774 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 16:44:39.796666   16774 cache_images.go:84] Images are preloaded, skipping loading
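The two docker images runs above are how the log decides the preload already landed: list the repo:tag pairs known to the daemon and compare against the expected set. A sketch of that check (preloadedImages and the want list are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadedImages lists repo:tag pairs known to the local daemon, the same
// listing the log performs before deciding to skip image loading.
func preloadedImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	return have, nil
}

func main() {
	want := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"}
	have, err := preloadedImages()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, img := range want {
		fmt.Println(img, "present:", have[img])
	}
}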
	I0920 16:44:39.796676   16774 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 16:44:39.796772   16774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-205029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 16:44:39.796836   16774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 16:44:39.839054   16774 cni.go:84] Creating CNI manager for ""
	I0920 16:44:39.839088   16774 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:39.839098   16774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 16:44:39.839117   16774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-205029 NodeName:addons-205029 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 16:44:39.839235   16774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-205029"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 16:44:39.839287   16774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 16:44:39.847381   16774 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 16:44:39.847443   16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 16:44:39.855519   16774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 16:44:39.870882   16774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 16:44:39.886343   16774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
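The kubeadm.yaml pushed above is rendered from the cluster parameters dumped at kubeadm.go:181. A pared-down sketch of rendering such a config with text/template (the template and field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A stand-in for rendering a kubeadm config from cluster parameters.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, struct {
		NodeIP, CRISocket, NodeName string
		Port                        int
	}{"192.168.49.2", "unix:///var/run/cri-dockerd.sock", "addons-205029", 8443})
}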
	I0920 16:44:39.902318   16774 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 16:44:39.905578   16774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 16:44:39.915155   16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:39.989270   16774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:44:40.001702   16774 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029 for IP: 192.168.49.2
	I0920 16:44:40.001723   16774 certs.go:194] generating shared ca certs ...
	I0920 16:44:40.001745   16774 certs.go:226] acquiring lock for ca certs: {Name:mk7859bcc6bcc87de2e2da04bdba4ac21b3ab143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.001867   16774 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key
	I0920 16:44:40.249259   16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt ...
	I0920 16:44:40.249287   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt: {Name:mk44a784a15cda94cf26c63cfd7e14aa1f1132b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.249459   16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key ...
	I0920 16:44:40.249471   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key: {Name:mkfca71425b22ed5e73544af15493c3cf339d073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.249541   16774 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key
	I0920 16:44:40.404491   16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.crt ...
	I0920 16:44:40.404519   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.crt: {Name:mk78c3531f6cec4a6da2c3ff045ac0c1be8662b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.404677   16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key ...
	I0920 16:44:40.404688   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key: {Name:mk47981bbe3a26551f13bf7ccae25f4674a14e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.404765   16774 certs.go:256] generating profile certs ...
	I0920 16:44:40.404815   16774 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.key
	I0920 16:44:40.404826   16774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt with IP's: []
	I0920 16:44:40.489727   16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt ...
	I0920 16:44:40.489760   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: {Name:mk1cb9d534fa0209713ec74aa58d9a7a8da5c7e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.489932   16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.key ...
	I0920 16:44:40.489942   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.key: {Name:mkc57a397f86b96efb60565f7dfd38ac2ddd4de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.490015   16774 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e
	I0920 16:44:40.490033   16774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 16:44:40.666783   16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e ...
	I0920 16:44:40.666814   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e: {Name:mkc84256309f9bc8986ecaf3e3ff5e2e1ceb68a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.666989   16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e ...
	I0920 16:44:40.667002   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e: {Name:mkca7096c26a1de58e29d211308975f671f2b850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.667074   16774 certs.go:381] copying /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt.532cd76e -> /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt
	I0920 16:44:40.667144   16774 certs.go:385] copying /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key.532cd76e -> /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key
	I0920 16:44:40.667196   16774 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key
	I0920 16:44:40.667214   16774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt with IP's: []
	I0920 16:44:40.752763   16774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt ...
	I0920 16:44:40.752794   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt: {Name:mk050a31d02d8979f4fe0e44c7f315005f69edf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:40.752957   16774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key ...
	I0920 16:44:40.752969   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key: {Name:mkc9a5c83731e76d42012d2048235cd283ee8d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
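certs.go above issues CA-signed profile certs whose IP SANs cover the service VIP, loopback, and node IP (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2). A sketch of that signing step with crypto/x509, with error handling trimmed and a throwaway CA stood up just so it runs end to end (signServingCert is hypothetical, not minikube's actual helper):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a CA-signed certificate whose IP SANs follow the
// pattern in the log above.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses:  ips,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway self-signed CA so the sketch runs standalone; errors ignored.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	pemBytes, err := signServingCert(caCert, caKey, []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
	})
	fmt.Println(err, len(pemBytes))
}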
	I0920 16:44:40.753118   16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 16:44:40.753151   16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem (1082 bytes)
	I0920 16:44:40.753174   16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem (1123 bytes)
	I0920 16:44:40.753195   16774 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem (1679 bytes)
	I0920 16:44:40.753729   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 16:44:40.775680   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 16:44:40.797382   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 16:44:40.819954   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 16:44:40.841745   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 16:44:40.863159   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 16:44:40.884669   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 16:44:40.905865   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 16:44:40.928562   16774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 16:44:40.950437   16774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 16:44:40.966146   16774 ssh_runner.go:195] Run: openssl version
	I0920 16:44:40.971368   16774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 16:44:40.980403   16774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:40.983824   16774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:40.983880   16774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:40.990237   16774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
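The b5213941.0 link name above is not arbitrary: OpenSSL locates CAs in /etc/ssl/certs by the certificate's subject hash plus a .0 suffix, which is what the preceding "openssl x509 -hash -noout" call computes. A sketch that recomputes the hash and recreates the link (linkBySubjectHash is illustrative; needs root to write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash recreates the "<hash>.0" symlink OpenSSL uses to find a
// CA at verify time, matching the ln -fs step above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ignore error; the link may not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}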
	I0920 16:44:40.999014   16774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 16:44:41.002181   16774 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 16:44:41.002225   16774 kubeadm.go:392] StartCluster: {Name:addons-205029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:41.002316   16774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 16:44:41.019562   16774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 16:44:41.028127   16774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 16:44:41.036685   16774 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 16:44:41.036754   16774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 16:44:41.045196   16774 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 16:44:41.045220   16774 kubeadm.go:157] found existing configuration files:
	
	I0920 16:44:41.045270   16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 16:44:41.053705   16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 16:44:41.053760   16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 16:44:41.062016   16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 16:44:41.070473   16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 16:44:41.070535   16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 16:44:41.078479   16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 16:44:41.086908   16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 16:44:41.086995   16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 16:44:41.095025   16774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 16:44:41.103447   16774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 16:44:41.103518   16774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 16:44:41.111319   16774 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 16:44:41.147076   16774 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 16:44:41.147152   16774 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 16:44:41.166636   16774 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 16:44:41.166726   16774 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0920 16:44:41.166759   16774 kubeadm.go:310] OS: Linux
	I0920 16:44:41.166800   16774 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 16:44:41.166842   16774 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 16:44:41.166886   16774 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 16:44:41.166928   16774 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 16:44:41.166989   16774 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 16:44:41.167083   16774 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 16:44:41.167176   16774 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 16:44:41.167248   16774 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 16:44:41.167313   16774 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 16:44:41.214923   16774 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 16:44:41.215063   16774 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 16:44:41.215226   16774 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 16:44:41.224975   16774 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 16:44:41.228035   16774 out.go:235]   - Generating certificates and keys ...
	I0920 16:44:41.228137   16774 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 16:44:41.228198   16774 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 16:44:41.352731   16774 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 16:44:41.559862   16774 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 16:44:41.760049   16774 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 16:44:41.947017   16774 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 16:44:42.023472   16774 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 16:44:42.023634   16774 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-205029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 16:44:42.210939   16774 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 16:44:42.211100   16774 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-205029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 16:44:42.399366   16774 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 16:44:42.617900   16774 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 16:44:42.701698   16774 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 16:44:42.701792   16774 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 16:44:42.814142   16774 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 16:44:42.955822   16774 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 16:44:43.055761   16774 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 16:44:43.154415   16774 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 16:44:43.366002   16774 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 16:44:43.366399   16774 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 16:44:43.368826   16774 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 16:44:43.371120   16774 out.go:235]   - Booting up control plane ...
	I0920 16:44:43.371226   16774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 16:44:43.371305   16774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 16:44:43.371390   16774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 16:44:43.380738   16774 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 16:44:43.386150   16774 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 16:44:43.386221   16774 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 16:44:43.467512   16774 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 16:44:43.467659   16774 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 16:44:43.968987   16774 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.598165ms
	I0920 16:44:43.969098   16774 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 16:44:48.471004   16774 kubeadm.go:310] [api-check] The API server is healthy after 4.501946786s
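The two health gates logged above ([kubelet-check] and [api-check]) poll the kubelet's plain-HTTP healthz endpoint and the API server over TLS. The equivalent manual probes, assuming shell access to the node (-k skips certificate verification for a quick check; /readyz is readable without credentials under the default system:public-info-viewer binding):

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
    curl -sfk https://192.168.49.2:8443/readyz && echo apiserver ok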
	I0920 16:44:48.482108   16774 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 16:44:48.492370   16774 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 16:44:48.509047   16774 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 16:44:48.509312   16774 kubeadm.go:310] [mark-control-plane] Marking the node addons-205029 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 16:44:48.516108   16774 kubeadm.go:310] [bootstrap-token] Using token: ss9buj.0c6u12p1td4a48ak
	I0920 16:44:48.517562   16774 out.go:235]   - Configuring RBAC rules ...
	I0920 16:44:48.517706   16774 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 16:44:48.520397   16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 16:44:48.526073   16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 16:44:48.528367   16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 16:44:48.530575   16774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 16:44:48.533852   16774 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 16:44:48.876548   16774 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 16:44:49.299932   16774 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 16:44:49.877603   16774 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 16:44:49.878389   16774 kubeadm.go:310] 
	I0920 16:44:49.878480   16774 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 16:44:49.878492   16774 kubeadm.go:310] 
	I0920 16:44:49.878586   16774 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 16:44:49.878595   16774 kubeadm.go:310] 
	I0920 16:44:49.878623   16774 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 16:44:49.878726   16774 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 16:44:49.878815   16774 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 16:44:49.878825   16774 kubeadm.go:310] 
	I0920 16:44:49.878961   16774 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 16:44:49.879013   16774 kubeadm.go:310] 
	I0920 16:44:49.879087   16774 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 16:44:49.879097   16774 kubeadm.go:310] 
	I0920 16:44:49.879178   16774 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 16:44:49.879289   16774 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 16:44:49.879380   16774 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 16:44:49.879392   16774 kubeadm.go:310] 
	I0920 16:44:49.879514   16774 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 16:44:49.879621   16774 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 16:44:49.879641   16774 kubeadm.go:310] 
	I0920 16:44:49.879756   16774 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ss9buj.0c6u12p1td4a48ak \
	I0920 16:44:49.879883   16774 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:240c065d4f95c9bb5d28e0d1bbd6719e72d2976d0c827c563409b1a9ab5915cb \
	I0920 16:44:49.879928   16774 kubeadm.go:310] 	--control-plane 
	I0920 16:44:49.879941   16774 kubeadm.go:310] 
	I0920 16:44:49.880015   16774 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 16:44:49.880021   16774 kubeadm.go:310] 
	I0920 16:44:49.880092   16774 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ss9buj.0c6u12p1td4a48ak \
	I0920 16:44:49.880187   16774 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:240c065d4f95c9bb5d28e0d1bbd6719e72d2976d0c827c563409b1a9ab5915cb 
	I0920 16:44:49.881503   16774 kubeadm.go:310] W0920 16:44:41.144356    1919 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:44:49.881808   16774 kubeadm.go:310] W0920 16:44:41.144993    1919 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:44:49.882016   16774 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0920 16:44:49.882108   16774 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
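Both warnings above are directly actionable with the commands kubeadm itself names; a sketch, assuming the generated config still sits at /var/tmp/minikube/kubeadm.yaml and the output path is arbitrary:

    # rewrite the deprecated v1beta3 ClusterConfiguration/InitConfiguration to the current API version
    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml
    # enable the kubelet unit to silence the Service-Kubelet warning on restarts
    sudo systemctl enable kubelet.service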
	I0920 16:44:49.882129   16774 cni.go:84] Creating CNI manager for ""
	I0920 16:44:49.882144   16774 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:49.884169   16774 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 16:44:49.885691   16774 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 16:44:49.893901   16774 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
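The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous lines. Its exact contents are not in the log; a hypothetical conflist of the usual shape (the plugin names and pod subnet below are illustrative assumptions, not values from this run):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF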
	I0920 16:44:49.909805   16774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 16:44:49.909868   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:49.909882   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-205029 minikube.k8s.io/updated_at=2024_09_20T16_44_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-205029 minikube.k8s.io/primary=true
	I0920 16:44:49.916704   16774 ops.go:34] apiserver oom_adj: -16
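The -16 read back above means the kernel will strongly prefer not to OOM-kill the apiserver (oom_adj ranges from -17, never kill, to +15). The same spot check by hand:

    # negative oom_adj lowers the process's OOM-kill priority
    cat /proc/$(pgrep kube-apiserver)/oom_adj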
	I0920 16:44:49.990468   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:50.491296   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:50.990755   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:51.491007   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:51.990868   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:52.490655   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:52.991377   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:53.491475   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:53.991454   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:54.491580   16774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:54.552978   16774 kubeadm.go:1113] duration metric: took 4.643164069s to wait for elevateKubeSystemPrivileges
	I0920 16:44:54.553011   16774 kubeadm.go:394] duration metric: took 13.550789888s to StartCluster
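The half-second polling above (the repeated "get sa default" runs) is minikube waiting for the default ServiceAccount to exist so the cluster-admin binding issued at 16:44:49.909868 can take effect; elevateKubeSystemPrivileges is essentially that wait. A standalone sketch of the same pattern, using the binary and kubeconfig paths from this run:

    # block until the default ServiceAccount exists, then bind cluster-admin to kube-system
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default \
        --kubeconfig=/var/lib/minikube/kubeconfig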
	I0920 16:44:54.553028   16774 settings.go:142] acquiring lock: {Name:mk0bd30b070fa56866482d504f296479e9d1b0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:54.553128   16774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 16:44:54.553544   16774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/kubeconfig: {Name:mk17e3b05f62f29ee13b5427250b308800e65dd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:54.553751   16774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 16:44:54.553747   16774 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 16:44:54.553771   16774 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
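Every key set to true in the toEnable map above is handled by its own goroutine, which is why the Setting/Checking lines below interleave out of order. The same switches are reachable one at a time from the CLI; an illustrative invocation (registry chosen as an example):

    minikube -p addons-205029 addons enable registry --alsologtostderr -v=1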
	I0920 16:44:54.553866   16774 addons.go:69] Setting volumesnapshots=true in profile "addons-205029"
	I0920 16:44:54.553871   16774 addons.go:69] Setting gcp-auth=true in profile "addons-205029"
	I0920 16:44:54.553873   16774 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-205029"
	I0920 16:44:54.553885   16774 addons.go:234] Setting addon volumesnapshots=true in "addons-205029"
	I0920 16:44:54.553884   16774 addons.go:69] Setting default-storageclass=true in profile "addons-205029"
	I0920 16:44:54.553892   16774 mustload.go:65] Loading cluster: addons-205029
	I0920 16:44:54.553888   16774 addons.go:69] Setting metrics-server=true in profile "addons-205029"
	I0920 16:44:54.553900   16774 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-205029"
	I0920 16:44:54.553903   16774 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-205029"
	I0920 16:44:54.553914   16774 addons.go:234] Setting addon metrics-server=true in "addons-205029"
	I0920 16:44:54.553914   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.553944   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.554090   16774 config.go:182] Loaded profile config "addons-205029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:44:54.553839   16774 addons.go:69] Setting cloud-spanner=true in profile "addons-205029"
	I0920 16:44:54.554174   16774 addons.go:234] Setting addon cloud-spanner=true in "addons-205029"
	I0920 16:44:54.554203   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.554254   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.554269   16774 addons.go:69] Setting storage-provisioner=true in profile "addons-205029"
	I0920 16:44:54.554283   16774 addons.go:234] Setting addon storage-provisioner=true in "addons-205029"
	I0920 16:44:54.554305   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.554326   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.554411   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.554464   16774 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-205029"
	I0920 16:44:54.554485   16774 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-205029"
	I0920 16:44:54.554506   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.554622   16774 config.go:182] Loaded profile config "addons-205029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:44:54.554660   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.554254   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.554744   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.554445   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.554926   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.555097   16774 addons.go:69] Setting inspektor-gadget=true in profile "addons-205029"
	I0920 16:44:54.555127   16774 addons.go:234] Setting addon inspektor-gadget=true in "addons-205029"
	I0920 16:44:54.555166   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.553851   16774 addons.go:69] Setting ingress=true in profile "addons-205029"
	I0920 16:44:54.555487   16774 addons.go:234] Setting addon ingress=true in "addons-205029"
	I0920 16:44:54.555546   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.555644   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.553851   16774 addons.go:69] Setting registry=true in profile "addons-205029"
	I0920 16:44:54.556041   16774 addons.go:234] Setting addon registry=true in "addons-205029"
	I0920 16:44:54.556074   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.556084   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.553858   16774 addons.go:69] Setting volcano=true in profile "addons-205029"
	I0920 16:44:54.556200   16774 addons.go:234] Setting addon volcano=true in "addons-205029"
	I0920 16:44:54.556256   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.553862   16774 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-205029"
	I0920 16:44:54.556415   16774 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-205029"
	I0920 16:44:54.556466   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.553848   16774 addons.go:69] Setting yakd=true in profile "addons-205029"
	I0920 16:44:54.556622   16774 addons.go:234] Setting addon yakd=true in "addons-205029"
	I0920 16:44:54.556650   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.557393   16774 out.go:177] * Verifying Kubernetes components...
	I0920 16:44:54.559186   16774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:54.553861   16774 addons.go:69] Setting ingress-dns=true in profile "addons-205029"
	I0920 16:44:54.559363   16774 addons.go:234] Setting addon ingress-dns=true in "addons-205029"
	I0920 16:44:54.559402   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.559886   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.600682   16774 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-205029"
	I0920 16:44:54.600730   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.601198   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.603185   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.609037   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.609536   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.609823   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.615769   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.629928   16774 addons.go:234] Setting addon default-storageclass=true in "addons-205029"
	I0920 16:44:54.629970   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:44:54.630369   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 16:44:54.630398   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:44:54.632905   16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 16:44:54.632935   16774 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 16:44:54.633003   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.630372   16774 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 16:44:54.636742   16774 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 16:44:54.630369   16774 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 16:44:54.639404   16774 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 16:44:54.639438   16774 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 16:44:54.639503   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.639822   16774 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 16:44:54.639839   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 16:44:54.639884   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.639822   16774 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:44:54.639910   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 16:44:54.639955   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.664138   16774 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 16:44:54.665947   16774 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 16:44:54.665972   16774 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 16:44:54.666034   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.666262   16774 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 16:44:54.667785   16774 out.go:177]   - Using image docker.io/busybox:stable
	I0920 16:44:54.669125   16774 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 16:44:54.669240   16774 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:44:54.669260   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 16:44:54.669314   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.671379   16774 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:44:54.671396   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 16:44:54.671447   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.672171   16774 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 16:44:54.673422   16774 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 16:44:54.674779   16774 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 16:44:54.674797   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 16:44:54.674850   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.687147   16774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:44:54.687200   16774 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 16:44:54.689674   16774 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 16:44:54.689899   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.690098   16774 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 16:44:54.690113   16774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 16:44:54.690163   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.690323   16774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:44:54.694932   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 16:44:54.694931   16774 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 16:44:54.696248   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 16:44:54.697074   16774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 16:44:54.698339   16774 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 16:44:54.698353   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 16:44:54.698397   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.698630   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 16:44:54.699000   16774 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:44:54.699017   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 16:44:54.699172   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.704934   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 16:44:54.707745   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 16:44:54.708023   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.711160   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 16:44:54.712657   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 16:44:54.714010   16774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 16:44:54.714396   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.715447   16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 16:44:54.715469   16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 16:44:54.715535   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.716387   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.725182   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.725514   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.728493   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.728917   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.734670   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.734725   16774 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 16:44:54.734847   16774 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 16:44:54.736133   16774 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 16:44:54.736156   16774 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 16:44:54.736216   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.736288   16774 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:44:54.736298   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 16:44:54.736337   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:44:54.739898   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.755575   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.762388   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:44:54.776018   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	W0920 16:44:54.778121   16774 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 16:44:54.778150   16774 retry.go:31] will retry after 329.660948ms: ssh: handshake failed: EOF
	I0920 16:44:54.778755   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	W0920 16:44:54.845390   16774 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 16:44:54.845428   16774 retry.go:31] will retry after 287.554184ms: ssh: handshake failed: EOF
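The two handshake EOFs above are expected noise: more than a dozen SSH sessions are dialed almost simultaneously against the single forwarded sshd at 127.0.0.1:32768, and the dropped ones are retried after a ~300ms backoff. A minimal sketch of the same retry shape, assuming the ssh CLI stands in for the Go dialer and using the key path from this run:

    # retry a flaky dial a few times with a short pause, mirroring retry.go's behavior
    for i in 1 2 3; do
      ssh -p 32768 -i /home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa \
          docker@127.0.0.1 true && break
      sleep 0.3
    done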
	I0920 16:44:55.043674   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 16:44:55.055027   16774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:44:55.055102   16774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 16:44:55.065870   16774 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 16:44:55.065898   16774 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 16:44:55.146472   16774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 16:44:55.146511   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 16:44:55.166900   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 16:44:55.245267   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:44:55.247824   16774 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:44:55.247901   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 16:44:55.251568   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 16:44:55.253883   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:44:55.255478   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:44:55.258841   16774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 16:44:55.258922   16774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 16:44:55.267822   16774 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 16:44:55.267854   16774 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 16:44:55.350723   16774 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 16:44:55.350822   16774 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 16:44:55.444717   16774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 16:44:55.444746   16774 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 16:44:55.555680   16774 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 16:44:55.555766   16774 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 16:44:55.561613   16774 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 16:44:55.561689   16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 16:44:55.655606   16774 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 16:44:55.655661   16774 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 16:44:55.744116   16774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 16:44:55.744206   16774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 16:44:55.845794   16774 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 16:44:55.845819   16774 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 16:44:55.950542   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:44:55.960105   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:44:56.043678   16774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:44:56.043769   16774 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 16:44:56.146727   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:44:56.152381   16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 16:44:56.152461   16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 16:44:56.162863   16774 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 16:44:56.162935   16774 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 16:44:56.248081   16774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 16:44:56.248165   16774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 16:44:56.263618   16774 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:44:56.263707   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 16:44:56.348527   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:44:56.668094   16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 16:44:56.668119   16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 16:44:56.745254   16774 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 16:44:56.745339   16774 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 16:44:56.854039   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:44:56.964754   16774 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 16:44:56.964785   16774 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 16:44:57.244950   16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 16:44:57.245034   16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 16:44:57.362038   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.318267333s)
	I0920 16:44:57.362174   16774 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.307004418s)
	I0920 16:44:57.362225   16774 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
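The replace completed above is the sed pipeline started at 16:44:55.055102: it splices a hosts block into the coredns ConfigMap ahead of the forward plugin so pods can resolve host.minikube.internal. Reconstructed from the sed expressions in that command, the injected Corefile fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }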
	I0920 16:44:57.363554   16774 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.308446195s)
	I0920 16:44:57.363756   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.196822695s)
	I0920 16:44:57.364839   16774 node_ready.go:35] waiting up to 6m0s for node "addons-205029" to be "Ready" ...
	I0920 16:44:57.449213   16774 node_ready.go:49] node "addons-205029" has status "Ready":"True"
	I0920 16:44:57.449248   16774 node_ready.go:38] duration metric: took 84.345171ms for node "addons-205029" to be "Ready" ...
	I0920 16:44:57.449260   16774 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:44:57.458128   16774 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.458187   16774 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 16:44:57.458317   16774 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 16:44:57.551583   16774 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:44:57.551699   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 16:44:57.745830   16774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 16:44:57.745908   16774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 16:44:57.866052   16774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-205029" context rescaled to 1 replicas
	I0920 16:44:57.943880   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:44:57.946121   16774 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 16:44:57.946144   16774 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 16:44:58.247147   16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 16:44:58.247214   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 16:44:58.358845   16774 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:44:58.358878   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 16:44:58.747645   16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 16:44:58.747677   16774 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 16:44:58.950800   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:44:59.063415   16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 16:44:59.063451   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 16:44:59.155782   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.910427243s)
	I0920 16:44:59.547844   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:44:59.646089   16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 16:44:59.646173   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 16:45:00.247595   16774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:00.247876   16774 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 16:45:00.745083   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:01.651758   16774 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 16:45:01.651862   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:45:01.677078   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:45:02.048960   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:02.344103   16774 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 16:45:02.565787   16774 addons.go:234] Setting addon gcp-auth=true in "addons-205029"
	I0920 16:45:02.565847   16774 host.go:66] Checking if "addons-205029" exists ...
	I0920 16:45:02.566364   16774 cli_runner.go:164] Run: docker container inspect addons-205029 --format={{.State.Status}}
	I0920 16:45:02.585067   16774 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 16:45:02.585116   16774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205029
	I0920 16:45:02.603463   16774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/addons-205029/id_rsa Username:docker}
	I0920 16:45:04.553643   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:06.346458   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.094783278s)
	I0920 16:45:06.346646   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.092731805s)
	I0920 16:45:06.346717   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.091168155s)
	I0920 16:45:06.346878   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.396256957s)
	I0920 16:45:06.346913   16774 addons.go:475] Verifying addon ingress=true in "addons-205029"
	I0920 16:45:06.347287   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.387097277s)
	I0920 16:45:06.347371   16774 addons.go:475] Verifying addon registry=true in "addons-205029"
	I0920 16:45:06.347399   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.200581354s)
	I0920 16:45:06.347519   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.998901085s)
	I0920 16:45:06.347535   16774 addons.go:475] Verifying addon metrics-server=true in "addons-205029"
	I0920 16:45:06.347583   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.493511884s)
	I0920 16:45:06.347766   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.403844047s)
	W0920 16:45:06.348818   16774 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:06.348846   16774 retry.go:31] will retry after 357.517696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
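	Both failures above are the standard CRD establishment race: the VolumeSnapshotClass object travels in the same apply batch as the CRD that defines it, and the API server rejects it until the CRD is Established. The retry below (16:45:06.707388) simply re-runs the apply once registration has caught up. An equivalent manual split, assuming a kubectl pointed at this cluster and the same manifest paths:

    # 1. install the CRDs and wait for the API server to establish them
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    # 2. only then create objects of the new kind
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml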
	I0920 16:45:06.347848   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.397016446s)
	I0920 16:45:06.349320   16774 out.go:177] * Verifying ingress addon...
	I0920 16:45:06.349334   16774 out.go:177] * Verifying registry addon...
	I0920 16:45:06.350499   16774 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-205029 service yakd-dashboard -n yakd-dashboard
	
	I0920 16:45:06.352647   16774 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 16:45:06.353760   16774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 16:45:06.360634   16774 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 16:45:06.360714   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:06.361206   16774 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 16:45:06.361232   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:06.707388   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:06.869584   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:06.869794   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:07.045928   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:07.357132   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:07.358678   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:07.863233   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:07.863730   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:08.054087   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.308897545s)
	I0920 16:45:08.054130   16774 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-205029"
	I0920 16:45:08.054161   16774 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.46906424s)
	I0920 16:45:08.056202   16774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:08.056216   16774 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 16:45:08.058227   16774 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 16:45:08.059155   16774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 16:45:08.060062   16774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 16:45:08.060084   16774 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 16:45:08.064438   16774 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 16:45:08.064470   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:08.145983   16774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 16:45:08.146012   16774 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 16:45:08.169135   16774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:08.169159   16774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 16:45:08.251668   16774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:08.357189   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:08.357755   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:08.565248   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:08.858283   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:08.858905   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:09.064976   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:09.145644   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.438186914s)
	I0920 16:45:09.357661   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:09.358063   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:09.464792   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:09.564001   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:09.677182   16774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.42547412s)
	I0920 16:45:09.679810   16774 addons.go:475] Verifying addon gcp-auth=true in "addons-205029"
	I0920 16:45:09.681561   16774 out.go:177] * Verifying gcp-auth addon...
	I0920 16:45:09.684026   16774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 16:45:09.744334   16774 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:45:09.857266   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:09.857540   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:09.964906   16774 pod_ready.go:98] pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:44:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 16:44:57 +0000 UTC,FinishedAt:2024-09-20 16:45:08 +0000 UTC,ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495 Started:0xc0022abf00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020d3740} {Name:kube-api-access-jc4d2 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020d3750}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 16:45:09.964930   16774 pod_ready.go:82] duration metric: took 12.50670797s for pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace to be "Ready" ...
	E0920 16:45:09.964941   16774 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hj9fq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:44:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:44:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 16:44:57 +0000 UTC,FinishedAt:2024-09-20 16:45:08 +0000 UTC,ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://1fd08be6a3b1fe44a3d403c64981a8e735ad763b8b05b0ce9e44829439e71495 Started:0xc0022abf00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020d3740} {Name:kube-api-access-jc4d2 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020d3750}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
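	The "Succeeded (skipping!)" entry above is the old CoreDNS replica winding down: its container exited 0 (most likely when minikube scaled the Deployment down to a single replica), so the pod's phase is Succeeded and it can never report Ready again. The waiter therefore abandons it and starts a fresh 6m0s wait on the surviving replica, coredns-7c65d6cfc9-zsdfb. A quick way to see the same picture by hand, assuming CoreDNS carries its usual k8s-app=kube-dns label:

		# Phase per CoreDNS pod: the terminated replica reports Succeeded,
		# the replacement reports Pending and then Running.
		kubectl --context addons-205029 -n kube-system get pods -l k8s-app=kube-dns \
		  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase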
	I0920 16:45:09.964951   16774 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:10.063304   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:10.356733   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:10.356848   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:10.563777   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:10.856662   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:10.856665   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:11.064149   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:11.357048   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:11.357123   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:11.564732   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:11.856680   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:11.856831   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:11.970531   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:12.064509   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:12.357881   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:12.358788   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:12.564051   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:12.857346   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:12.858339   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:13.063942   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:13.356739   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:13.517937   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:13.562888   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:13.856664   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:13.856711   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:14.063265   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:14.356509   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:14.357078   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:14.471213   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:14.563937   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:14.856959   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:14.857212   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.063518   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:15.356394   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.356673   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.562928   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:15.856714   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.857004   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.064244   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.356955   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.358161   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.563838   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.857636   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.857903   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.970693   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:17.063464   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.356583   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.356820   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.563705   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.856472   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.856758   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.062585   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.356806   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.356868   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.563163   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.857541   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.858596   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.063440   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.356634   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.356819   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.470369   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:19.564760   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.856401   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.856701   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.063957   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.357484   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.357803   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.563889   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.857385   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.857752   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:21.062940   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.356983   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:21.357023   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.471566   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:21.565784   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.856837   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.857201   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.064427   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.356428   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.356626   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.563675   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.857345   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.857957   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.064043   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.357127   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.357292   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.563581   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.856217   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.856986   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.974217   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:24.063121   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.357632   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.358463   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.564260   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.857612   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.858618   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.063619   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.356581   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.356726   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.562929   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.856648   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.857164   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.064029   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.356548   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.356677   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.470019   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:26.563369   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.856686   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.856869   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.063307   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.356446   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.356673   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.563665   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.856663   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.856885   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.063787   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.356807   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.357526   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.563791   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.856777   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:28.857009   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.970845   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:29.063611   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.356884   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.357078   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:29.563704   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.856412   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.856589   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.063364   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.356655   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.356798   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.564172   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.857185   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:30.857809   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.971069   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:31.064040   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.356967   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.357114   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:31.563136   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.856914   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.857069   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.063465   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.357188   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:32.357241   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.564182   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.856910   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.857092   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.064520   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.356335   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.356753   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:33.471255   16774 pod_ready.go:103] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:33.563215   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.856488   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.857248   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.064280   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.356859   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:34.357362   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.563958   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.856934   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.857222   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.064443   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.356391   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:35.356420   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.564173   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.856514   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.856656   16774 kapi.go:107] duration metric: took 29.502896632s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 16:45:35.970452   16774 pod_ready.go:93] pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:35.970481   16774 pod_ready.go:82] duration metric: took 26.005522531s for pod "coredns-7c65d6cfc9-zsdfb" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.970496   16774 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.976916   16774 pod_ready.go:93] pod "etcd-addons-205029" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:35.976940   16774 pod_ready.go:82] duration metric: took 6.435502ms for pod "etcd-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.976953   16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.982502   16774 pod_ready.go:93] pod "kube-apiserver-addons-205029" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:35.982523   16774 pod_ready.go:82] duration metric: took 5.563544ms for pod "kube-apiserver-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.982533   16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.987118   16774 pod_ready.go:93] pod "kube-controller-manager-addons-205029" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:35.987140   16774 pod_ready.go:82] duration metric: took 4.599853ms for pod "kube-controller-manager-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.987152   16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m6rvs" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.991494   16774 pod_ready.go:93] pod "kube-proxy-m6rvs" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:35.991520   16774 pod_ready.go:82] duration metric: took 4.359262ms for pod "kube-proxy-m6rvs" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:35.991532   16774 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:36.063857   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.357630   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.368906   16774 pod_ready.go:93] pod "kube-scheduler-addons-205029" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:36.368938   16774 pod_ready.go:82] duration metric: took 377.396539ms for pod "kube-scheduler-addons-205029" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:36.368976   16774 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xpzd9" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:36.563592   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.768396   16774 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xpzd9" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:36.768424   16774 pod_ready.go:82] duration metric: took 399.438014ms for pod "nvidia-device-plugin-daemonset-xpzd9" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:36.768432   16774 pod_ready.go:39] duration metric: took 39.319159915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:36.768452   16774 api_server.go:52] waiting for apiserver process to appear ...
	I0920 16:45:36.768502   16774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:36.784595   16774 api_server.go:72] duration metric: took 42.230762976s to wait for apiserver process to appear ...
	I0920 16:45:36.784617   16774 api_server.go:88] waiting for apiserver healthz status ...
	I0920 16:45:36.784638   16774 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 16:45:36.789300   16774 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 16:45:36.790264   16774 api_server.go:141] control plane version: v1.31.1
	I0920 16:45:36.790288   16774 api_server.go:131] duration metric: took 5.665428ms to wait for apiserver health ...
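	The healthz probe above is a plain HTTPS GET against the apiserver. With Kubernetes' defaults this endpoint is readable without credentials (the system:public-info-viewer ClusterRole grants /healthz, /livez, /readyz and /version to anonymous users), so the check can be reproduced by hand; -k skips verification of minikube's self-signed cluster CA:

		# Expect HTTP 200 and the body "ok", matching the two log lines above.
		curl -k https://192.168.49.2:8443/healthz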
	I0920 16:45:36.790297   16774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 16:45:36.857027   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.975185   16774 system_pods.go:59] 17 kube-system pods found
	I0920 16:45:36.975218   16774 system_pods.go:61] "coredns-7c65d6cfc9-zsdfb" [726c17a6-7f53-49e4-ac8a-783182889340] Running
	I0920 16:45:36.975229   16774 system_pods.go:61] "csi-hostpath-attacher-0" [3c7a327a-8620-48ac-ab71-1dce4985efc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:36.975240   16774 system_pods.go:61] "csi-hostpath-resizer-0" [8ad61db7-1573-41a0-bdbf-4409341769e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:36.975250   16774 system_pods.go:61] "csi-hostpathplugin-f5rlb" [433d3846-18be-4200-81ee-9c1b69c03797] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:36.975257   16774 system_pods.go:61] "etcd-addons-205029" [da5fb10c-8086-498a-bda4-2f1cac80070e] Running
	I0920 16:45:36.975264   16774 system_pods.go:61] "kube-apiserver-addons-205029" [33309b9f-3d85-48c0-b656-51de82848533] Running
	I0920 16:45:36.975273   16774 system_pods.go:61] "kube-controller-manager-addons-205029" [1a0232fb-ffab-4e7a-88cf-c26f2c65aa24] Running
	I0920 16:45:36.975281   16774 system_pods.go:61] "kube-ingress-dns-minikube" [cf2b54b5-fa63-42a9-a833-af0242b4cb46] Running
	I0920 16:45:36.975290   16774 system_pods.go:61] "kube-proxy-m6rvs" [235e9e6f-4299-43f7-8b9e-8887ecb70cd5] Running
	I0920 16:45:36.975295   16774 system_pods.go:61] "kube-scheduler-addons-205029" [7dd9cb76-e503-4e1e-a4c6-bcf31e76e886] Running
	I0920 16:45:36.975307   16774 system_pods.go:61] "metrics-server-84c5f94fbc-44j97" [d4cc25a2-9517-4e7b-9fa5-57b6a061d910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:36.975314   16774 system_pods.go:61] "nvidia-device-plugin-daemonset-xpzd9" [caf9d40a-dff4-4e28-b6c7-d185e6e30b5a] Running
	I0920 16:45:36.975323   16774 system_pods.go:61] "registry-66c9cd494c-2sstq" [67cce838-d446-44f8-90cb-4b7c286fcfcb] Running
	I0920 16:45:36.975328   16774 system_pods.go:61] "registry-proxy-r58ln" [243fbbcd-f60b-492a-ab03-a7425f4bce3b] Running
	I0920 16:45:36.975341   16774 system_pods.go:61] "snapshot-controller-56fcc65765-l8spt" [2770a7ca-b59f-4393-a8b8-a0380a26fc3c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:36.975361   16774 system_pods.go:61] "snapshot-controller-56fcc65765-lzk5g" [2fa9904e-14d6-4369-8c43-740334b4055f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:36.975369   16774 system_pods.go:61] "storage-provisioner" [a34eadc8-0330-4959-afc1-2093e6fc6774] Running
	I0920 16:45:36.975378   16774 system_pods.go:74] duration metric: took 185.075475ms to wait for pod list to return data ...
	I0920 16:45:36.975389   16774 default_sa.go:34] waiting for default service account to be created ...
	I0920 16:45:37.064100   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.168740   16774 default_sa.go:45] found service account: "default"
	I0920 16:45:37.168768   16774 default_sa.go:55] duration metric: took 193.368649ms for default service account to be created ...
	I0920 16:45:37.168779   16774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 16:45:37.357286   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.375225   16774 system_pods.go:86] 17 kube-system pods found
	I0920 16:45:37.375254   16774 system_pods.go:89] "coredns-7c65d6cfc9-zsdfb" [726c17a6-7f53-49e4-ac8a-783182889340] Running
	I0920 16:45:37.375265   16774 system_pods.go:89] "csi-hostpath-attacher-0" [3c7a327a-8620-48ac-ab71-1dce4985efc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:37.375273   16774 system_pods.go:89] "csi-hostpath-resizer-0" [8ad61db7-1573-41a0-bdbf-4409341769e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:37.375284   16774 system_pods.go:89] "csi-hostpathplugin-f5rlb" [433d3846-18be-4200-81ee-9c1b69c03797] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:37.375290   16774 system_pods.go:89] "etcd-addons-205029" [da5fb10c-8086-498a-bda4-2f1cac80070e] Running
	I0920 16:45:37.375297   16774 system_pods.go:89] "kube-apiserver-addons-205029" [33309b9f-3d85-48c0-b656-51de82848533] Running
	I0920 16:45:37.375303   16774 system_pods.go:89] "kube-controller-manager-addons-205029" [1a0232fb-ffab-4e7a-88cf-c26f2c65aa24] Running
	I0920 16:45:37.375316   16774 system_pods.go:89] "kube-ingress-dns-minikube" [cf2b54b5-fa63-42a9-a833-af0242b4cb46] Running
	I0920 16:45:37.375322   16774 system_pods.go:89] "kube-proxy-m6rvs" [235e9e6f-4299-43f7-8b9e-8887ecb70cd5] Running
	I0920 16:45:37.375327   16774 system_pods.go:89] "kube-scheduler-addons-205029" [7dd9cb76-e503-4e1e-a4c6-bcf31e76e886] Running
	I0920 16:45:37.375334   16774 system_pods.go:89] "metrics-server-84c5f94fbc-44j97" [d4cc25a2-9517-4e7b-9fa5-57b6a061d910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:37.375347   16774 system_pods.go:89] "nvidia-device-plugin-daemonset-xpzd9" [caf9d40a-dff4-4e28-b6c7-d185e6e30b5a] Running
	I0920 16:45:37.375351   16774 system_pods.go:89] "registry-66c9cd494c-2sstq" [67cce838-d446-44f8-90cb-4b7c286fcfcb] Running
	I0920 16:45:37.375354   16774 system_pods.go:89] "registry-proxy-r58ln" [243fbbcd-f60b-492a-ab03-a7425f4bce3b] Running
	I0920 16:45:37.375360   16774 system_pods.go:89] "snapshot-controller-56fcc65765-l8spt" [2770a7ca-b59f-4393-a8b8-a0380a26fc3c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:37.375368   16774 system_pods.go:89] "snapshot-controller-56fcc65765-lzk5g" [2fa9904e-14d6-4369-8c43-740334b4055f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:37.375372   16774 system_pods.go:89] "storage-provisioner" [a34eadc8-0330-4959-afc1-2093e6fc6774] Running
	I0920 16:45:37.375379   16774 system_pods.go:126] duration metric: took 206.594432ms to wait for k8s-apps to be running ...
	I0920 16:45:37.375389   16774 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 16:45:37.375441   16774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:45:37.389864   16774 system_svc.go:56] duration metric: took 14.467151ms WaitForService to wait for kubelet
	I0920 16:45:37.389896   16774 kubeadm.go:582] duration metric: took 42.836065711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:45:37.389918   16774 node_conditions.go:102] verifying NodePressure condition ...
	I0920 16:45:37.564272   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.569494   16774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 16:45:37.569523   16774 node_conditions.go:123] node cpu capacity is 8
	I0920 16:45:37.569540   16774 node_conditions.go:105] duration metric: took 179.615518ms to run NodePressure ...
	I0920 16:45:37.569554   16774 start.go:241] waiting for startup goroutines ...
	I0920 16:45:37.856861   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.064450   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.356139   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.563871   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.857137   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.064868   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.356944   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.564081   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.856834   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.063985   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.360725   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.563516   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.857175   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.064245   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.355868   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.563484   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.857264   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.063933   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.357349   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.563534   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.857646   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.064229   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.356856   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.563921   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.857009   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.063159   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.356843   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.563236   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.857659   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.064360   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.357196   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.563704   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.857110   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.063314   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.356126   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.562909   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.857170   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.064229   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.357575   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.563900   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.856747   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.064600   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.357248   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.565106   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.857239   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.063572   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.357361   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.563239   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.857557   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.064189   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.357046   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.563470   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.856233   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.064325   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.356631   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.563585   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.857153   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.064489   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.357283   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.602223   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.856882   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.063295   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.356609   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.563057   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.856854   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.063933   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.356866   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.563271   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.856922   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.063761   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.432327   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.564284   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.856546   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.064114   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.356883   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.563642   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.857469   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.064693   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.357560   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.564010   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.856660   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.064180   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.356652   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.564409   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.856794   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.064278   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.356861   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.564396   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.857220   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.063551   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.355824   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.564168   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.856946   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.064190   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.357506   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.563279   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:01.857086   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.063556   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.357325   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.564442   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:02.856912   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.064071   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.356081   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.563801   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:03.857188   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.063652   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:04.357421   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.563533   16774 kapi.go:107] duration metric: took 56.504381183s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 16:46:04.856764   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.356543   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.856913   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.357045   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.858653   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.356456   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.856432   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.356846   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.856690   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.357334   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.857425   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.357427   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.857270   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.356682   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.857025   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.357053   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.857543   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.427837   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.857348   16774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:14.356559   16774 kapi.go:107] duration metric: took 1m8.003907962s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 16:46:33.188098   16774 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:46:33.188121   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.687533   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.187496   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.687770   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.187960   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.686566   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.187700   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.687760   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.187938   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.686881   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.187951   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.687806   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.187937   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.686583   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.187144   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.687153   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.187121   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.686752   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.188749   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.686493   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.187703   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.687582   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.187423   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.687257   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.188066   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.687827   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.187552   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.687074   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.187186   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.686780   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.187783   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.686597   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.188086   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.687089   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.186922   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.688773   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.187532   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.687727   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.188074   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.687066   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.187303   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.687233   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.187109   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.687205   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.187211   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.687160   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.186859   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.687710   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.188023   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.686931   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.186770   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.687853   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.187816   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.687299   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.187825   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.695714   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.186802   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.687963   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.188150   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.687378   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.187394   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.689232   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.187384   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.687313   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.187578   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.687359   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.187163   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.687111   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.188080   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.687987   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.186887   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.687025   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.186720   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.687638   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.188103   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.688066   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.186930   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.687577   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.188172   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.687570   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.187259   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.687266   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.187289   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.687099   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.187032   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.687033   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.186824   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.687722   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.188038   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.687803   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.187752   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.686889   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.187660   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.687175   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.186890   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.687981   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.187702   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.687810   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.187884   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.688201   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.187041   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.687248   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.186939   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.686956   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.187944   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.687856   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.187846   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.687488   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.187845   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.687538   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.187500   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.687612   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.187430   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.686537   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.187861   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.687371   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.187045   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.686941   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.187333   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.686783   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.187789   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.686770   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.187699   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.687643   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.187599   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.687704   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.187648   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.687460   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.187563   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:37.687288   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.187131   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:38.687439   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.187058   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:39.687594   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.187849   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:40.686756   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:41.187849   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:41.687848   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:42.188377   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:42.687446   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:43.188308   16774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:43.687617   16774 kapi.go:107] duration metric: took 2m34.003590742s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 16:47:43.689263   16774 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-205029 cluster.
	I0920 16:47:43.691115   16774 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 16:47:43.692517   16774 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 16:47:43.694024   16774 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner-rancher, volcano, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 16:47:43.695362   16774 addons.go:510] duration metric: took 2m49.141592586s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner-rancher volcano storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 16:47:43.695416   16774 start.go:246] waiting for cluster config update ...
	I0920 16:47:43.695443   16774 start.go:255] writing updated cluster config ...
	I0920 16:47:43.695724   16774 ssh_runner.go:195] Run: rm -f paused
	I0920 16:47:43.744662   16774 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 16:47:43.746645   16774 out.go:177] * Done! kubectl is now configured to use "addons-205029" cluster and "default" namespace by default
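	The gcp-auth messages above note that credentials are mounted into every new pod unless the pod carries a label with the `gcp-auth-skip-secret` key. As an illustrative sketch only (the pod name, container name, and the label value "true" are assumptions for this example; the log above only names the label key), an opted-out pod spec could look like:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: example-no-gcp-creds          # hypothetical name, not taken from this report
	    labels:
	      gcp-auth-skip-secret: "true"      # assumed value; only the key is named in the log above
	  spec:
	    containers:
	    - name: app                          # hypothetical container name
	      image: gcr.io/k8s-minikube/busybox
	      command: ["sleep", "3600"]
	
	Applying such a manifest (e.g. kubectl --context addons-205029 apply -f pod.yaml) would, per the message above, leave that one pod without the mounted GCP credentials.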
	
	
	==> Docker <==
	Sep 20 16:57:15 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:15Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.200217406Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=8b41ac1c172cbd6d1e887c082d7b84137325526cb5404a5ff4e2c9b46ec92693 spanID=5331617768338407 traceID=12f7e2a72b18a0df9f11c1c0b50664aa
	Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.221749663Z" level=info msg="ignoring event" container=8b41ac1c172cbd6d1e887c082d7b84137325526cb5404a5ff4e2c9b46ec92693 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.341749999Z" level=info msg="ignoring event" container=fcf8b6c5426f938727b077259b0b93dafcaefc6f0ff3c3dc7f40c95b197a696b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.756357771Z" level=info msg="ignoring event" container=9fce6a32f2f45be74aafce36b1ed3ee8caa42aa0595637a0f016f44bd54ef68a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.757278420Z" level=info msg="ignoring event" container=ed3f6bf61f9d7d7c536be1e57d9af03b1d7d5b6560f1a245ab2fc9ae52ff778f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.899339249Z" level=info msg="ignoring event" container=792ba9be9bd8fd820a90f1e42908a37e550fbde59dc1aad493e1393f62dc08d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:19 addons-205029 dockerd[1338]: time="2024-09-20T16:57:19.941417965Z" level=info msg="ignoring event" container=956fbc6feda09d3046bc0a2d5ab69bc273d5ed15ecf4a9e887ff9a57ef020d28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:20 addons-205029 dockerd[1338]: time="2024-09-20T16:57:20.280887678Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=02bccbf18e05720d traceID=796d04e38cd35e776d24af1aee8a7830
	Sep 20 16:57:20 addons-205029 dockerd[1338]: time="2024-09-20T16:57:20.283232320Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=02bccbf18e05720d traceID=796d04e38cd35e776d24af1aee8a7830
	Sep 20 16:57:23 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/988c0c3c01b8dec5c02d6845445cc43712d5e7a30eb38fc37b7a1c4f228d320c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 20 16:57:24 addons-205029 dockerd[1338]: time="2024-09-20T16:57:24.204344276Z" level=info msg="ignoring event" container=44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:24 addons-205029 dockerd[1338]: time="2024-09-20T16:57:24.250644894Z" level=info msg="ignoring event" container=814032f2e100c2accc20910994da34b116ee63d12f61d0e1c1cd5333a1148fe4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:25 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:25Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 20 16:57:28 addons-205029 dockerd[1338]: time="2024-09-20T16:57:28.170705177Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3 spanID=88fd4a1ca725db4a traceID=94ce11c16daa9b2673671b0abb87f9b1
	Sep 20 16:57:28 addons-205029 dockerd[1338]: time="2024-09-20T16:57:28.224616834Z" level=info msg="ignoring event" container=b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:28 addons-205029 dockerd[1338]: time="2024-09-20T16:57:28.362563363Z" level=info msg="ignoring event" container=2057642ad0f9c93fbbbb3e9da32f2d92a0c23179aaa864519f6d63e2ead0faa5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:29 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:29Z" level=error msg="error getting RW layer size for container ID '44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599': Error response from daemon: No such container: 44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599"
	Sep 20 16:57:29 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '44cb7bd2bdb7e94b0b127aa312b64a7946e3f77225b421b9fbf952e982f83599'"
	Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.147088821Z" level=info msg="ignoring event" container=9d2da6dde0ef36d874416b5e56c01a77648d285675a9f153cef856d00064e58f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.656191639Z" level=info msg="ignoring event" container=a0f94f0a24718148dc0489393e7aea5377a510b08ca21b5fa848daf98bede421 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.727755489Z" level=info msg="ignoring event" container=c20060aa3ed13af6cf27794ae93751298dedebb43f4e90faca7daea0cd145e79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.807510088Z" level=info msg="ignoring event" container=e6a7d18e663a25a730d4f6a1fd3b40253be8000145625dff5a67c31b3ff8508c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:37 addons-205029 cri-dockerd[1604]: time="2024-09-20T16:57:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-r58ln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 20 16:57:37 addons-205029 dockerd[1338]: time="2024-09-20T16:57:37.897505870Z" level=info msg="ignoring event" container=c04164513365621b2371cadffa8cc82b903bfb8592fa006de4896f508ce02c08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5081cf10fe14c       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  13 seconds ago      Running             hello-world-app           0                   988c0c3c01b8d       hello-world-app-55bf9c44b4-tmpmp
	1b7718ec98f92       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                23 seconds ago      Running             nginx                     0                   37186b004ad20       nginx
	897b1b4e3fa07       a416a98b71e22                                                                                                                49 seconds ago      Exited              helper-pod                0                   20b37dd65d3f4       helper-pod-delete-pvc-d6bd4afe-8bba-4f86-86d7-a230517a8194
	bc4d995d11bcc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   9a3c16f48fbd0       gcp-auth-89d5ffd79-p7btr
	5799d09540395       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   9b42c9a68f759       ingress-nginx-admission-patch-rpgr8
	1cc91738417ed       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   20d86356b5231       ingress-nginx-admission-create-fht9m
	c20060aa3ed13       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   c041645133656       registry-proxy-r58ln
	a0f94f0a24718       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   e6a7d18e663a2       registry-66c9cd494c-2sstq
	d38f69a74eb18       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   8cb88a4f31c25       storage-provisioner
	e22b2be76b742       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   0c3648605f747       coredns-7c65d6cfc9-zsdfb
	82e7a7b780258       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   09cc583370936       kube-proxy-m6rvs
	3556b0f5ce7c0       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   7dbf2978ee818       etcd-addons-205029
	613a1b8e140bb       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   605264de80a0e       kube-scheduler-addons-205029
	f35872d5577c2       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   25fcc64785e28       kube-apiserver-addons-205029
	7215c72a915c2       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   52f087565180b       kube-controller-manager-addons-205029
	
	
	==> coredns [e22b2be76b74] <==
	[INFO] 10.244.0.8:40573 - 6500 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008079s
	[INFO] 10.244.0.8:38463 - 40354 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072848s
	[INFO] 10.244.0.8:38463 - 65440 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108458s
	[INFO] 10.244.0.8:48618 - 36252 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004903327s
	[INFO] 10.244.0.8:48618 - 59539 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00487924s
	[INFO] 10.244.0.8:49813 - 17578 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004355539s
	[INFO] 10.244.0.8:49813 - 11438 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005364924s
	[INFO] 10.244.0.8:40461 - 26041 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004535564s
	[INFO] 10.244.0.8:40461 - 42677 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00616796s
	[INFO] 10.244.0.8:60468 - 35914 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083808s
	[INFO] 10.244.0.8:60468 - 26950 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129051s
	[INFO] 10.244.0.25:40388 - 37440 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027489s
	[INFO] 10.244.0.25:46048 - 5263 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016114s
	[INFO] 10.244.0.25:52248 - 1253 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014577s
	[INFO] 10.244.0.25:47006 - 62795 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00020073s
	[INFO] 10.244.0.25:57177 - 26279 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129635s
	[INFO] 10.244.0.25:34269 - 32671 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00017305s
	[INFO] 10.244.0.25:34520 - 3129 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008757732s
	[INFO] 10.244.0.25:43942 - 25288 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.008829255s
	[INFO] 10.244.0.25:35893 - 48985 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007424114s
	[INFO] 10.244.0.25:33589 - 27688 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007640202s
	[INFO] 10.244.0.25:52108 - 25128 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006747925s
	[INFO] 10.244.0.25:32878 - 20082 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007507541s
	[INFO] 10.244.0.25:36973 - 13363 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.003010406s
	[INFO] 10.244.0.25:37503 - 32130 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.004152006s
	
	
	==> describe nodes <==
	Name:               addons-205029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-205029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=addons-205029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T16_44_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-205029
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:44:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-205029
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 16:57:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 16:57:25 +0000   Fri, 20 Sep 2024 16:44:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 16:57:25 +0000   Fri, 20 Sep 2024 16:44:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 16:57:25 +0000   Fri, 20 Sep 2024 16:44:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 16:57:25 +0000   Fri, 20 Sep 2024 16:44:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-205029
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7636be5e00d74b5e91ccb5e8ab2cd570
	  System UUID:                f5c8962a-51ca-4e02-8bba-f9cc61977477
	  Boot ID:                    1090cbe7-7e52-40cc-b00d-227cb699fd1e
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-tmpmp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  gcp-auth                    gcp-auth-89d5ffd79-p7btr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-zsdfb                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-205029                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-205029             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-205029    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-m6rvs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-205029             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-205029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-205029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-205029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-205029 event: Registered Node addons-205029 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e d9 28 2e 82 1c 08 06
	[  +2.232064] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 31 f4 87 1d 47 08 06
	[  +2.880027] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 6f 7b d0 48 22 08 06
	[Sep20 16:46] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 60 44 3e a5 82 08 06
	[  +0.065178] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 45 c1 15 3a ff 08 06
	[  +0.014735] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 44 75 61 3e 61 08 06
	[  +7.830531] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 c5 96 fc 06 0d 08 06
	[  +3.891799] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 fd a0 0f 0c 90 08 06
	[Sep20 16:47] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000002] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 67 c5 70 47 a4 08 06
	[  +0.000000] ll header: 00000000: ff ff ff ff ff ff 8e 07 02 01 1c ad 08 06
	[ +28.840848] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 9c 63 ec fd b8 08 06
	[  +0.000460] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6a b0 dc e0 b7 f6 08 06
	[Sep20 16:57] IPv4: martian source 10.244.0.35 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 fd a0 0f 0c 90 08 06
	
	
	==> etcd [3556b0f5ce7c] <==
	{"level":"info","ts":"2024-09-20T16:44:45.270865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:45.270901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:45.270913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:45.270927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:45.272001Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-205029 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T16:44:45.272071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:45.272178Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:45.272256Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T16:44:45.272297Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T16:44:45.272320Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:45.273048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:45.273118Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:45.273139Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:45.273385Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:45.273500Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:45.274299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T16:44:45.274516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T16:45:13.515320Z","caller":"traceutil/trace.go:171","msg":"trace[23916195] linearizableReadLoop","detail":"{readStateIndex:961; appliedIndex:960; }","duration":"160.089812ms","start":"2024-09-20T16:45:13.355210Z","end":"2024-09-20T16:45:13.515300Z","steps":["trace[23916195] 'read index received'  (duration: 96.016148ms)","trace[23916195] 'applied index is now lower than readState.Index'  (duration: 64.072732ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T16:45:13.515459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.312532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:45:13.515529Z","caller":"traceutil/trace.go:171","msg":"trace[746983478] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:940; }","duration":"160.3997ms","start":"2024-09-20T16:45:13.355118Z","end":"2024-09-20T16:45:13.515518Z","steps":["trace[746983478] 'agreement among raft nodes before linearized reading'  (duration: 160.289271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:45:13.515539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.107782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f701923977c502\" ","response":"range_response_count:1 size:927"}
	{"level":"info","ts":"2024-09-20T16:45:13.515567Z","caller":"traceutil/trace.go:171","msg":"trace[1285220694] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f701923977c502; range_end:; response_count:1; response_revision:940; }","duration":"112.137698ms","start":"2024-09-20T16:45:13.403419Z","end":"2024-09-20T16:45:13.515557Z","steps":["trace[1285220694] 'agreement among raft nodes before linearized reading'  (duration: 112.028858ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:54:45.382856Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1847}
	{"level":"info","ts":"2024-09-20T16:54:45.405678Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1847,"took":"22.278866ms","hash":435374599,"current-db-size-bytes":8638464,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4804608,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-20T16:54:45.405721Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435374599,"revision":1847,"compact-revision":-1}
	
	
	==> gcp-auth [bc4d995d11bc] <==
	2024/09/20 16:48:24 Ready to write response ...
	2024/09/20 16:48:24 Ready to marshal response ...
	2024/09/20 16:48:24 Ready to write response ...
	2024/09/20 16:56:34 Ready to marshal response ...
	2024/09/20 16:56:34 Ready to write response ...
	2024/09/20 16:56:37 Ready to marshal response ...
	2024/09/20 16:56:37 Ready to write response ...
	2024/09/20 16:56:37 Ready to marshal response ...
	2024/09/20 16:56:37 Ready to write response ...
	2024/09/20 16:56:37 Ready to marshal response ...
	2024/09/20 16:56:37 Ready to write response ...
	2024/09/20 16:56:44 Ready to marshal response ...
	2024/09/20 16:56:44 Ready to write response ...
	2024/09/20 16:56:44 Ready to marshal response ...
	2024/09/20 16:56:44 Ready to write response ...
	2024/09/20 16:56:44 Ready to marshal response ...
	2024/09/20 16:56:44 Ready to write response ...
	2024/09/20 16:56:48 Ready to marshal response ...
	2024/09/20 16:56:48 Ready to write response ...
	2024/09/20 16:57:03 Ready to marshal response ...
	2024/09/20 16:57:03 Ready to write response ...
	2024/09/20 16:57:11 Ready to marshal response ...
	2024/09/20 16:57:11 Ready to write response ...
	2024/09/20 16:57:23 Ready to marshal response ...
	2024/09/20 16:57:23 Ready to write response ...
	
	
	==> kernel <==
	 16:57:38 up 40 min,  0 users,  load average: 0.34, 0.44, 0.49
	Linux addons-205029 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [f35872d5577c] <==
	W0920 16:48:16.444571       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 16:48:16.467947       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 16:48:16.552976       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 16:48:16.848687       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 16:48:17.207394       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 16:56:42.788647       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0920 16:56:43.277083       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 16:56:44.120076       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.91.39"}
	E0920 16:57:04.395270       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0920 16:57:06.374840       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 16:57:07.389598       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 16:57:11.838527       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 16:57:12.053349       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.158.169"}
	I0920 16:57:19.358106       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:19.358163       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:19.380116       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:19.380170       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:19.451567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:19.451620       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:19.553167       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:19.553213       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 16:57:20.380456       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 16:57:20.554037       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 16:57:20.562322       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 16:57:23.561300       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.153.45"}
	
	
	==> kube-controller-manager [7215c72a915c] <==
	I0920 16:57:25.151411       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0920 16:57:25.188001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-205029"
	W0920 16:57:26.391160       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:26.391196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:57:26.699728       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.21819ms"
	I0920 16:57:26.700134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.35µs"
	W0920 16:57:27.452057       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:27.452096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:27.772024       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:27.772061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:28.300181       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:28.300218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:30.514674       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:30.514720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:32.906554       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:32.906596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:34.633226       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:34.633264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:57:35.215649       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0920 16:57:36.841342       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0920 16:57:37.591817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="19.096µs"
	W0920 16:57:37.644133       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:37.644183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:37.834094       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:37.834140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [82e7a7b78025] <==
	I0920 16:44:57.451996       1 server_linux.go:66] "Using iptables proxy"
	I0920 16:44:58.053162       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 16:44:58.053279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 16:44:58.260214       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 16:44:58.260274       1 server_linux.go:169] "Using iptables Proxier"
	I0920 16:44:58.360455       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 16:44:58.360968       1 server.go:483] "Version info" version="v1.31.1"
	I0920 16:44:58.360994       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 16:44:58.363088       1 config.go:199] "Starting service config controller"
	I0920 16:44:58.363121       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 16:44:58.363158       1 config.go:105] "Starting endpoint slice config controller"
	I0920 16:44:58.363165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 16:44:58.363745       1 config.go:328] "Starting node config controller"
	I0920 16:44:58.363754       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 16:44:58.463732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 16:44:58.463787       1 shared_informer.go:320] Caches are synced for node config
	I0920 16:44:58.464383       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [613a1b8e140b] <==
	E0920 16:44:46.864114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 16:44:46.864654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:46.864693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 16:44:46.864711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:46.865071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:46.865099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:47.676557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 16:44:47.676596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:47.700046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:47.700082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:47.779872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 16:44:47.779918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:47.786207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 16:44:47.786261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:47.859057       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:47.859105       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 16:44:47.875247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:47.875287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:47.883636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 16:44:47.883675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:47.884304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 16:44:47.884342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:48.036828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:48.036864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 16:44:51.060280       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.596820    2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x67ds\" (UniqueName: \"kubernetes.io/projected/27679691-b05b-4349-adcc-503ae9858cbb-kube-api-access-x67ds\") pod \"27679691-b05b-4349-adcc-503ae9858cbb\" (UID: \"27679691-b05b-4349-adcc-503ae9858cbb\") "
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.596890    2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27679691-b05b-4349-adcc-503ae9858cbb-webhook-cert\") pod \"27679691-b05b-4349-adcc-503ae9858cbb\" (UID: \"27679691-b05b-4349-adcc-503ae9858cbb\") "
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.598817    2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27679691-b05b-4349-adcc-503ae9858cbb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "27679691-b05b-4349-adcc-503ae9858cbb" (UID: "27679691-b05b-4349-adcc-503ae9858cbb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.598991    2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27679691-b05b-4349-adcc-503ae9858cbb-kube-api-access-x67ds" (OuterVolumeSpecName: "kube-api-access-x67ds") pod "27679691-b05b-4349-adcc-503ae9858cbb" (UID: "27679691-b05b-4349-adcc-503ae9858cbb"). InnerVolumeSpecName "kube-api-access-x67ds". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.698059    2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x67ds\" (UniqueName: \"kubernetes.io/projected/27679691-b05b-4349-adcc-503ae9858cbb-kube-api-access-x67ds\") on node \"addons-205029\" DevicePath \"\""
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.698098    2429 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27679691-b05b-4349-adcc-503ae9858cbb-webhook-cert\") on node \"addons-205029\" DevicePath \"\""
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.711772    2429 scope.go:117] "RemoveContainer" containerID="b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.726078    2429 scope.go:117] "RemoveContainer" containerID="b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
	Sep 20 16:57:28 addons-205029 kubelet[2429]: E0920 16:57:28.726802    2429 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3" containerID="b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
	Sep 20 16:57:28 addons-205029 kubelet[2429]: I0920 16:57:28.726841    2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"} err="failed to get container status \"b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3\": rpc error: code = Unknown desc = Error response from daemon: No such container: b278379f0b9377b06ee1f701f0f7e379dfdb2a0d14fc91c2f713aa3d4d692de3"
	Sep 20 16:57:29 addons-205029 kubelet[2429]: I0920 16:57:29.170060    2429 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27679691-b05b-4349-adcc-503ae9858cbb" path="/var/lib/kubelet/pods/27679691-b05b-4349-adcc-503ae9858cbb/volumes"
	Sep 20 16:57:31 addons-205029 kubelet[2429]: E0920 16:57:31.168168    2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="28df48e5-8914-4ad7-9aa6-f963fe3d9246"
	Sep 20 16:57:33 addons-205029 kubelet[2429]: E0920 16:57:33.165014    2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8c28c68-13a2-465a-8862-d35552e16a2d"
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.350307    2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb6pp\" (UniqueName: \"kubernetes.io/projected/28df48e5-8914-4ad7-9aa6-f963fe3d9246-kube-api-access-nb6pp\") pod \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\" (UID: \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\") "
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.350375    2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28df48e5-8914-4ad7-9aa6-f963fe3d9246-gcp-creds\") pod \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\" (UID: \"28df48e5-8914-4ad7-9aa6-f963fe3d9246\") "
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.351075    2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28df48e5-8914-4ad7-9aa6-f963fe3d9246-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "28df48e5-8914-4ad7-9aa6-f963fe3d9246" (UID: "28df48e5-8914-4ad7-9aa6-f963fe3d9246"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.352964    2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28df48e5-8914-4ad7-9aa6-f963fe3d9246-kube-api-access-nb6pp" (OuterVolumeSpecName: "kube-api-access-nb6pp") pod "28df48e5-8914-4ad7-9aa6-f963fe3d9246" (UID: "28df48e5-8914-4ad7-9aa6-f963fe3d9246"). InnerVolumeSpecName "kube-api-access-nb6pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.451162    2429 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28df48e5-8914-4ad7-9aa6-f963fe3d9246-gcp-creds\") on node \"addons-205029\" DevicePath \"\""
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.451193    2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nb6pp\" (UniqueName: \"kubernetes.io/projected/28df48e5-8914-4ad7-9aa6-f963fe3d9246-kube-api-access-nb6pp\") on node \"addons-205029\" DevicePath \"\""
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.954476    2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5w5t\" (UniqueName: \"kubernetes.io/projected/67cce838-d446-44f8-90cb-4b7c286fcfcb-kube-api-access-r5w5t\") pod \"67cce838-d446-44f8-90cb-4b7c286fcfcb\" (UID: \"67cce838-d446-44f8-90cb-4b7c286fcfcb\") "
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.954535    2429 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khs5m\" (UniqueName: \"kubernetes.io/projected/243fbbcd-f60b-492a-ab03-a7425f4bce3b-kube-api-access-khs5m\") pod \"243fbbcd-f60b-492a-ab03-a7425f4bce3b\" (UID: \"243fbbcd-f60b-492a-ab03-a7425f4bce3b\") "
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.957569    2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cce838-d446-44f8-90cb-4b7c286fcfcb-kube-api-access-r5w5t" (OuterVolumeSpecName: "kube-api-access-r5w5t") pod "67cce838-d446-44f8-90cb-4b7c286fcfcb" (UID: "67cce838-d446-44f8-90cb-4b7c286fcfcb"). InnerVolumeSpecName "kube-api-access-r5w5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:57:37 addons-205029 kubelet[2429]: I0920 16:57:37.957737    2429 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243fbbcd-f60b-492a-ab03-a7425f4bce3b-kube-api-access-khs5m" (OuterVolumeSpecName: "kube-api-access-khs5m") pod "243fbbcd-f60b-492a-ab03-a7425f4bce3b" (UID: "243fbbcd-f60b-492a-ab03-a7425f4bce3b"). InnerVolumeSpecName "kube-api-access-khs5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:57:38 addons-205029 kubelet[2429]: I0920 16:57:38.055062    2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r5w5t\" (UniqueName: \"kubernetes.io/projected/67cce838-d446-44f8-90cb-4b7c286fcfcb-kube-api-access-r5w5t\") on node \"addons-205029\" DevicePath \"\""
	Sep 20 16:57:38 addons-205029 kubelet[2429]: I0920 16:57:38.055100    2429 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-khs5m\" (UniqueName: \"kubernetes.io/projected/243fbbcd-f60b-492a-ab03-a7425f4bce3b-kube-api-access-khs5m\") on node \"addons-205029\" DevicePath \"\""
	
	
	==> storage-provisioner [d38f69a74eb1] <==
	I0920 16:45:02.364685       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 16:45:02.452457       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 16:45:02.452594       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 16:45:02.545268       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 16:45:02.545521       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-205029_9d42ca77-fe74-48dc-9687-29c7e7fa26f2!
	I0920 16:45:02.546684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25331077-7087-4405-a476-a7c45133fe38", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-205029_9d42ca77-fe74-48dc-9687-29c7e7fa26f2 became leader
	I0920 16:45:02.646435       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-205029_9d42ca77-fe74-48dc-9687-29c7e7fa26f2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-205029 -n addons-205029
helpers_test.go:261: (dbg) Run:  kubectl --context addons-205029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-205029 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-205029 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-205029/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 16:48:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75jnd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-75jnd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-205029
	  Warning  Failed     7m56s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m44s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4m7s (x22 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.50s)
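Note on the failure mode: every pull of gcr.io/k8s-minikube/busybox in the post-mortem above fails with "unauthorized: authentication failed", so the probe pod never reached Running and the wget against registry.kube-system.svc.cluster.local timed out. A minimal diagnostic sketch for reproducing the pull outside the harness (illustrative only, not part of the test; assumes the addons-205029 profile is still up and the same minikube binary):

	# Retry the pull from inside the minikube node; the same 401 points at gcr.io, not the cluster.
	out/minikube-linux-amd64 -p addons-205029 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Surface the kubelet's image-pull failures as events.
	kubectl --context addons-205029 get events -n default --field-selector reason=Failed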

                                                
                                    
TestKubernetesUpgrade (342.86s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 17:30:28.134530   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.105402805s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-873587
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-873587: (13.504368046s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-873587 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-873587 status --format={{.Host}}: exit status 7 (82.843378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
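The harness accepts exit status 7 here because minikube's status command appears to report state as a small bitmask (host, cluster, and Kubernetes "not running" flags OR'd together in the minikube sources of this vintage), so a cleanly stopped cluster exits 7 rather than 0. A hedged shell sketch of branching on that code with the same binary and profile:

	out/minikube-linux-amd64 -p kubernetes-upgrade-873587 status --format={{.Host}}
	rc=$?
	case $rc in
	  0) echo "fully running" ;;
	  7) echo "stopped (expected right after 'minikube stop')" ;;  # 1|2|4 under the bitmask reading above
	  *) echo "unexpected status code: $rc" >&2 ;;
	esac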
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.141641181s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-873587 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (86.368789ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-873587] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-873587
	    minikube start -p kubernetes-upgrade-873587 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8735872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-873587 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
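The exit status 106 above accompanies minikube's K8S_DOWNGRADE_UNSUPPORTED guard, so the harness moves on by restarting at the version the profile kept. When reproducing by hand, a quick hedged check of that surviving version (assumes kubectl and jq on PATH; jq is not used by the test itself):

	kubectl --context kubernetes-upgrade-873587 version --output=json \
	  | jq -r '.serverVersion.gitVersion'   # expect v1.31.1 for this profile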
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 17:35:46.832758   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 90 (13.288755713s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-873587] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-873587" primary control-plane node in "kubernetes-upgrade-873587" cluster
	* Pulling base image v0.0.45-1726784731-19672 ...
	* Updating the running docker "kubernetes-upgrade-873587" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:35:40.654083  396597 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:35:40.654484  396597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:35:40.654524  396597 out.go:358] Setting ErrFile to fd 2...
	I0920 17:35:40.654541  396597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:35:40.654861  396597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:35:40.655873  396597 out.go:352] Setting JSON to false
	I0920 17:35:40.657891  396597 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4685,"bootTime":1726849056,"procs":402,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:35:40.658070  396597 start.go:139] virtualization: kvm guest
	I0920 17:35:40.660171  396597 out.go:177] * [kubernetes-upgrade-873587] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:35:40.661714  396597 notify.go:220] Checking for updates...
	I0920 17:35:40.662910  396597 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:35:40.664344  396597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:35:40.665976  396597 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 17:35:40.667716  396597 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	I0920 17:35:40.669529  396597 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:35:40.671204  396597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:35:40.674323  396597 config.go:182] Loaded profile config "kubernetes-upgrade-873587": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:35:40.675087  396597 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:35:40.703897  396597 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:35:40.703978  396597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:35:40.764249  396597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:88 SystemTime:2024-09-20 17:35:40.748622608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:35:40.764371  396597 docker.go:318] overlay module found
	I0920 17:35:40.765999  396597 out.go:177] * Using the docker driver based on existing profile
	I0920 17:35:40.767479  396597 start.go:297] selected driver: docker
	I0920 17:35:40.767501  396597 start.go:901] validating driver "docker" against &{Name:kubernetes-upgrade-873587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-873587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:35:40.767639  396597 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:35:40.768818  396597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:35:40.819931  396597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:88 SystemTime:2024-09-20 17:35:40.810618549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:35:40.820264  396597 cni.go:84] Creating CNI manager for ""
	I0920 17:35:40.820319  396597 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:35:40.820364  396597 start.go:340] cluster config:
	{Name:kubernetes-upgrade-873587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-873587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:35:40.822545  396597 out.go:177] * Starting "kubernetes-upgrade-873587" primary control-plane node in "kubernetes-upgrade-873587" cluster
	I0920 17:35:40.823856  396597 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 17:35:40.825231  396597 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 17:35:40.826860  396597 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:35:40.826898  396597 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 17:35:40.826908  396597 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 17:35:40.826916  396597 cache.go:56] Caching tarball of preloaded images
	I0920 17:35:40.827173  396597 preload.go:172] Found /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 17:35:40.827202  396597 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 17:35:40.827325  396597 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/kubernetes-upgrade-873587/config.json ...
	W0920 17:35:40.851968  396597 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed is of wrong architecture
	I0920 17:35:40.851989  396597 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 17:35:40.852060  396597 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 17:35:40.852072  396597 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 17:35:40.852079  396597 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 17:35:40.852085  396597 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 17:35:40.852090  396597 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 17:35:40.904264  396597 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 17:35:40.904307  396597 cache.go:194] Successfully downloaded all kic artifacts
	I0920 17:35:40.904348  396597 start.go:360] acquireMachinesLock for kubernetes-upgrade-873587: {Name:mk222a66497da7390cc984d4040983c15d2591de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:35:40.904414  396597 start.go:364] duration metric: took 43.333µs to acquireMachinesLock for "kubernetes-upgrade-873587"
	I0920 17:35:40.904431  396597 start.go:96] Skipping create...Using existing machine configuration
	I0920 17:35:40.904439  396597 fix.go:54] fixHost starting: 
	I0920 17:35:40.904667  396597 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-873587 --format={{.State.Status}}
	I0920 17:35:40.923685  396597 fix.go:112] recreateIfNeeded on kubernetes-upgrade-873587: state=Running err=<nil>
	W0920 17:35:40.923723  396597 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 17:35:40.926402  396597 out.go:177] * Updating the running docker "kubernetes-upgrade-873587" container ...
	I0920 17:35:40.927737  396597 machine.go:93] provisionDockerMachine start ...
	I0920 17:35:40.927812  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:40.944357  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:40.944657  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:40.944679  396597 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:35:41.086883  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-873587
	
	I0920 17:35:41.086963  396597 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-873587"
	I0920 17:35:41.087103  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.105674  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.105852  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.105862  396597 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-873587 && echo "kubernetes-upgrade-873587" | sudo tee /etc/hostname
	I0920 17:35:41.266911  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-873587
	
	I0920 17:35:41.267020  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.290700  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.290944  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.290966  396597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-873587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-873587/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-873587' | sudo tee -a /etc/hosts; 
				fi
			fi
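The host-name patch above is idempotent: it rewrites the 127.0.1.1 line only when /etc/hosts has no entry for the node yet. A quick manual check of the result (shown for reference, not part of the minikube run):
	grep kubernetes-upgrade-873587 /etc/hosts
	# expected to include: 127.0.1.1 kubernetes-upgrade-873587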
	I0920 17:35:41.431066  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:35:41.431096  396597 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8616/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8616/.minikube}
	I0920 17:35:41.431118  396597 ubuntu.go:177] setting up certificates
	I0920 17:35:41.431131  396597 provision.go:84] configureAuth start
	I0920 17:35:41.431187  396597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-873587
	I0920 17:35:41.451077  396597 provision.go:143] copyHostCerts
	I0920 17:35:41.451151  396597 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem, removing ...
	I0920 17:35:41.451163  396597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem
	I0920 17:35:41.451236  396597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem (1123 bytes)
	I0920 17:35:41.451421  396597 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem, removing ...
	I0920 17:35:41.451437  396597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem
	I0920 17:35:41.451481  396597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem (1679 bytes)
	I0920 17:35:41.451556  396597 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem, removing ...
	I0920 17:35:41.451567  396597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem
	I0920 17:35:41.451600  396597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem (1082 bytes)
	I0920 17:35:41.451677  396597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-873587 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-873587 localhost minikube]
	I0920 17:35:41.584969  396597 provision.go:177] copyRemoteCerts
	I0920 17:35:41.585056  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:35:41.585110  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.608577  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:41.709264  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:35:41.735451  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 17:35:41.767666  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:35:41.804991  396597 provision.go:87] duration metric: took 373.843691ms to configureAuth
	I0920 17:35:41.805031  396597 ubuntu.go:193] setting minikube options for container-runtime
	I0920 17:35:41.805252  396597 config.go:182] Loaded profile config "kubernetes-upgrade-873587": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:35:41.805316  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.825131  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.825304  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.825315  396597 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 17:35:41.960847  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 17:35:41.960874  396597 ubuntu.go:71] root file system type: overlay
	I0920 17:35:41.961008  396597 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 17:35:41.961076  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.984303  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.984538  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.984634  396597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 17:35:42.134216  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 17:35:42.134287  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.169215  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:42.169452  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:42.169479  396597 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 17:35:42.308866  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
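The one-liner above is an update-only-if-changed guard: diff -u exits non-zero when the rendered unit differs from the installed one, and only then does the || group move the new file into place, reload systemd, and restart docker. The same pattern in isolation (a sketch with hypothetical file names):
	# replace and reload only when the newly rendered file differs
	sudo diff -u /etc/foo.conf /etc/foo.conf.new || {
	  sudo mv /etc/foo.conf.new /etc/foo.conf
	  sudo systemctl daemon-reload
	}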
	I0920 17:35:42.308896  396597 machine.go:96] duration metric: took 1.38114105s to provisionDockerMachine
	I0920 17:35:42.308910  396597 start.go:293] postStartSetup for "kubernetes-upgrade-873587" (driver="docker")
	I0920 17:35:42.308923  396597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:35:42.308999  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:35:42.309050  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.331138  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.432063  396597 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:35:42.435238  396597 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:35:42.435267  396597 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:35:42.435275  396597 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:35:42.435282  396597 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 17:35:42.435295  396597 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/addons for local assets ...
	I0920 17:35:42.435351  396597 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/files for local assets ...
	I0920 17:35:42.435432  396597 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem -> 153982.pem in /etc/ssl/certs
	I0920 17:35:42.435522  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:35:42.443892  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem --> /etc/ssl/certs/153982.pem (1708 bytes)
	I0920 17:35:42.467539  396597 start.go:296] duration metric: took 158.612986ms for postStartSetup
	I0920 17:35:42.467648  396597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:35:42.467702  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.486458  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.575800  396597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 17:35:42.581049  396597 fix.go:56] duration metric: took 1.676600539s for fixHost
	I0920 17:35:42.581079  396597 start.go:83] releasing machines lock for "kubernetes-upgrade-873587", held for 1.676653276s
	I0920 17:35:42.581153  396597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-873587
	I0920 17:35:42.599066  396597 ssh_runner.go:195] Run: cat /version.json
	I0920 17:35:42.599114  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.599141  396597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:35:42.599226  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.623103  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.624642  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.710546  396597 ssh_runner.go:195] Run: systemctl --version
	I0920 17:35:42.791595  396597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:35:42.796464  396597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 17:35:42.817745  396597 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 17:35:42.817846  396597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 17:35:42.835098  396597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 17:35:42.852196  396597 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:35:42.852267  396597 start.go:495] detecting cgroup driver to use...
	I0920 17:35:42.852305  396597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:35:42.852421  396597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:35:42.886165  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:35:42.901176  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:35:42.913579  396597 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:35:42.913640  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:35:42.925128  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:35:42.948266  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:35:42.959039  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:35:42.971880  396597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:35:42.981174  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:35:42.991618  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:35:43.001728  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:35:43.012134  396597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:35:43.021091  396597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:35:43.029799  396597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:43.127578  396597 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 17:35:53.344787  396597 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.217170362s)
	I0920 17:35:53.344820  396597 start.go:495] detecting cgroup driver to use...
	I0920 17:35:53.344854  396597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:35:53.344903  396597 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 17:35:53.366809  396597 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 17:35:53.366898  396597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:35:53.385901  396597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:35:53.409473  396597 ssh_runner.go:195] Run: which cri-dockerd
	I0920 17:35:53.413938  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:35:53.426911  396597 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:35:53.448660  396597 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 17:35:53.564176  396597 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 17:35:53.677779  396597 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:35:53.677915  396597 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
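The 130-byte daemon.json itself is not echoed in the log; a minimal sketch of the kind of payload that pins the Docker cgroup driver (the option name is a real dockerd setting, the exact file contents here are an assumption):
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}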
	I0920 17:35:53.702618  396597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:53.794319  396597 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 17:35:53.861243  396597 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0920 17:35:53.885383  396597 out.go:201] 
	W0920 17:35:53.887032  396597 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 20 17:31:11 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.586108848Z" level=info msg="Starting up"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.610407430Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.645230579Z" level=info msg="Loading containers: start."
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.788063737Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.835161696Z" level=info msg="Loading containers: done."
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.844767415Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.844841510Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.867956063Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.867986915Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:11 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.133663753Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.135650660Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.136518013Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.249935226Z" level=info msg="Starting up"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.277956065Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.293392201Z" level=info msg="Loading containers: start."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.477370680Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.534303909Z" level=info msg="Loading containers: done."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.552425213Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.552495324Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.585436148Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.585451203Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.615196449Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.617081286Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.617981804Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.661934444Z" level=info msg="Starting up"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.682593795Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.692306244Z" level=info msg="Loading containers: start."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.856548271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.904845813Z" level=info msg="Loading containers: done."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.915555552Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.915636752Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.940305096Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.940391459Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.432548352Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.434493419Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.435517142Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:20.473750102Z" level=info msg="Starting up"
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:20.493449037Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.409244672Z" level=info msg="Loading containers: start."
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.555214453Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.602541464Z" level=info msg="Loading containers: done."
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.615527856Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.615608187Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.640153450Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.640181954Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:22 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:46 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:46.203208646Z" level=info msg="ignoring event" container=9b3506e9460736d419e9fd2c8a84f3792b91975d0922e44d979f91c9e6cca44f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:07 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:07.219314787Z" level=info msg="ignoring event" container=0aeea47e1f1e38da46be4a4231afc01dc1cfff2e558ee1e8c78c05c1fcf8adb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:07 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:07.311487475Z" level=info msg="ignoring event" container=3e7564736f9c93c46738e82fee3e8a789d26c1a14373cc1c8d7492d3e49ac2b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:21 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:21.059938641Z" level=info msg="ignoring event" container=3b2d1b08c2677fe2dca2144c3a149081da6df7b39dc38490a90c0483ab51e4dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:48 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:48.246624349Z" level=info msg="ignoring event" container=0c79f21d6fa6d7651e792bf84c9464a73cf0d8faabdb8dd2a3c8eee3480a69b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:52 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:52.556789831Z" level=info msg="ignoring event" container=5b749582d4a32488f3d8309729bfe1f0e433edbed1c0c30c239c7b57db8ac1e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:33:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:33:29.040742386Z" level=info msg="ignoring event" container=8cd04d9c4d49bc581c950ff8d4dbc3c3ba6bb9300cc332ee52530b4bd5c6f553 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:33:40 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:33:40.066388875Z" level=info msg="ignoring event" container=66942b88616e91d221966b171e0fa00574895213ad102060b99083d4324c3db5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:34:34 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:34:34.060996505Z" level=info msg="ignoring event" container=5e194648c75f65737ba288b0bbc7f09bacefa4c4b0e263b00372d0919723aebe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:34:45 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:34:45.083894610Z" level=info msg="ignoring event" container=3981f33f9e2e649e957cadd5de32da70cd994f103698365889db9018f8acd9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.317890818Z" level=info msg="ignoring event" container=5109588067a484876b18d1242c09951545cdc70291f3a3624f284e137014e5ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.392518054Z" level=info msg="ignoring event" container=87b581f1a4b6933206e9c039814bee5ce1eebb29fb7fd803a9880d1320bcfd8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.462458405Z" level=info msg="ignoring event" container=0cad685cce60f73ca96e1f651b0bac5e5298251054242ec67d8b1f6f6681cb47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.530805054Z" level=info msg="ignoring event" container=b0fe5a4e23af646edad8e10114cf9aa43882490150aa6fd103a2c800b2d10dc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.137925197Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.279976137Z" level=info msg="ignoring event" container=ea685766a237359d8ab7c6cc90e5197f4a10b4cad9033244443be60e2187c31a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.281701681Z" level=info msg="ignoring event" container=e8bfa342fed94df001310d4868f7bb417a69727ba686e3e84bb17eff26225b28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.282893565Z" level=info msg="ignoring event" container=a0af9610a890ffd85d6ecb453d4bdfcddca2d4c443b426574a76a6feb3cc912a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.345205240Z" level=info msg="ignoring event" container=135b53a106c2835bc991807853e20dda6f7fb0cfe40f1c4ac90b7879f6355c91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.347684220Z" level=info msg="ignoring event" container=f16fbf0045d50080ca53e7dfb664b4adf3846de39a9a600e714a03a64ad11841 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.358258284Z" level=info msg="ignoring event" container=fe99147fb2ad3489546bb6554a27ef459e51083af5aa9ccc8d993c44c4d8b279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.361793505Z" level=info msg="ignoring event" container=b3a15ff1810f47cd2a2dad82ccc50b1d56f57266ca53c24321e9937d0852b534 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.173983349Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=53b942e551be140497adea5859a88dcdad4202f125e0bf56afb6335da0047534
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.212400680Z" level=info msg="ignoring event" container=53b942e551be140497adea5859a88dcdad4202f125e0bf56afb6335da0047534 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.243501737Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.244555166Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13007]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13052]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13105]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
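The journal above pins the immediate cause: dockerd restarts in a loop because the server certificate and key under /etc/docker do not match each other. A standard way to confirm such a mismatch on the node is to compare the public key derived from each file (generic openssl usage, not executed in this log):
	openssl x509 -noout -pubkey -in /etc/docker/server.pem | sha256sum
	sudo openssl pkey -pubout -in /etc/docker/server-key.pem | sha256sum
	# differing digests reproduce: tls: private key does not match public key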
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 20 17:31:11 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.586108848Z" level=info msg="Starting up"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.610407430Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.645230579Z" level=info msg="Loading containers: start."
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.788063737Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.835161696Z" level=info msg="Loading containers: done."
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.844767415Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.844841510Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.867956063Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.867986915Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:11 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.133663753Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.135650660Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.136518013Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.249935226Z" level=info msg="Starting up"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.277956065Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.293392201Z" level=info msg="Loading containers: start."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.477370680Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.534303909Z" level=info msg="Loading containers: done."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.552425213Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.552495324Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.585436148Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.585451203Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.615196449Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.617081286Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.617981804Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.661934444Z" level=info msg="Starting up"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.682593795Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.692306244Z" level=info msg="Loading containers: start."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.856548271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.904845813Z" level=info msg="Loading containers: done."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.915555552Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.915636752Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.940305096Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.940391459Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.432548352Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.434493419Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.435517142Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:20.473750102Z" level=info msg="Starting up"
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:20.493449037Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.409244672Z" level=info msg="Loading containers: start."
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.555214453Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.602541464Z" level=info msg="Loading containers: done."
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.615527856Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.615608187Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.640153450Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.640181954Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:22 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:46 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:46.203208646Z" level=info msg="ignoring event" container=9b3506e9460736d419e9fd2c8a84f3792b91975d0922e44d979f91c9e6cca44f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:07 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:07.219314787Z" level=info msg="ignoring event" container=0aeea47e1f1e38da46be4a4231afc01dc1cfff2e558ee1e8c78c05c1fcf8adb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:07 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:07.311487475Z" level=info msg="ignoring event" container=3e7564736f9c93c46738e82fee3e8a789d26c1a14373cc1c8d7492d3e49ac2b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:21 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:21.059938641Z" level=info msg="ignoring event" container=3b2d1b08c2677fe2dca2144c3a149081da6df7b39dc38490a90c0483ab51e4dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:48 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:48.246624349Z" level=info msg="ignoring event" container=0c79f21d6fa6d7651e792bf84c9464a73cf0d8faabdb8dd2a3c8eee3480a69b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:52 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:52.556789831Z" level=info msg="ignoring event" container=5b749582d4a32488f3d8309729bfe1f0e433edbed1c0c30c239c7b57db8ac1e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:33:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:33:29.040742386Z" level=info msg="ignoring event" container=8cd04d9c4d49bc581c950ff8d4dbc3c3ba6bb9300cc332ee52530b4bd5c6f553 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:33:40 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:33:40.066388875Z" level=info msg="ignoring event" container=66942b88616e91d221966b171e0fa00574895213ad102060b99083d4324c3db5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:34:34 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:34:34.060996505Z" level=info msg="ignoring event" container=5e194648c75f65737ba288b0bbc7f09bacefa4c4b0e263b00372d0919723aebe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:34:45 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:34:45.083894610Z" level=info msg="ignoring event" container=3981f33f9e2e649e957cadd5de32da70cd994f103698365889db9018f8acd9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.317890818Z" level=info msg="ignoring event" container=5109588067a484876b18d1242c09951545cdc70291f3a3624f284e137014e5ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.392518054Z" level=info msg="ignoring event" container=87b581f1a4b6933206e9c039814bee5ce1eebb29fb7fd803a9880d1320bcfd8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.462458405Z" level=info msg="ignoring event" container=0cad685cce60f73ca96e1f651b0bac5e5298251054242ec67d8b1f6f6681cb47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.530805054Z" level=info msg="ignoring event" container=b0fe5a4e23af646edad8e10114cf9aa43882490150aa6fd103a2c800b2d10dc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.137925197Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.279976137Z" level=info msg="ignoring event" container=ea685766a237359d8ab7c6cc90e5197f4a10b4cad9033244443be60e2187c31a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.281701681Z" level=info msg="ignoring event" container=e8bfa342fed94df001310d4868f7bb417a69727ba686e3e84bb17eff26225b28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.282893565Z" level=info msg="ignoring event" container=a0af9610a890ffd85d6ecb453d4bdfcddca2d4c443b426574a76a6feb3cc912a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.345205240Z" level=info msg="ignoring event" container=135b53a106c2835bc991807853e20dda6f7fb0cfe40f1c4ac90b7879f6355c91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.347684220Z" level=info msg="ignoring event" container=f16fbf0045d50080ca53e7dfb664b4adf3846de39a9a600e714a03a64ad11841 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.358258284Z" level=info msg="ignoring event" container=fe99147fb2ad3489546bb6554a27ef459e51083af5aa9ccc8d993c44c4d8b279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.361793505Z" level=info msg="ignoring event" container=b3a15ff1810f47cd2a2dad82ccc50b1d56f57266ca53c24321e9937d0852b534 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.173983349Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=53b942e551be140497adea5859a88dcdad4202f125e0bf56afb6335da0047534
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.212400680Z" level=info msg="ignoring event" container=53b942e551be140497adea5859a88dcdad4202f125e0bf56afb6335da0047534 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.243501737Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.244555166Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13007]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13052]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13105]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0920 17:35:53.887091  396597 out.go:270] * 
	W0920 17:35:53.888413  396597 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 17:35:53.892044  396597 out.go:201] 

** /stderr **
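The decisive failure in the journal above is dockerd refusing to start: "invalid TLS configuration: error reading X509 key pair ... tls: private key does not match public key", logged by each restarted daemon (dockerd[13007], dockerd[13052], dockerd[13105]) on every systemd restart attempt. That exact message comes from Go's crypto/tls when /etc/docker/server.pem and /etc/docker/server-key.pem no longer form a matching pair, consistent with the upgrade re-provisioning one file but not the other. A minimal Go sketch of the same startup check, assuming only the two paths shown in the log (the program is illustrative, not dockerd's code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"os"
	)

	func main() {
		// The key pair dockerd tries to load, per the journal above.
		certFile := "/etc/docker/server.pem"
		keyFile := "/etc/docker/server-key.pem"

		// crypto/tls returns "tls: private key does not match public key"
		// when the certificate and key files are out of sync, which is the
		// exact error each restarted dockerd logs above.
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			fmt.Fprintf(os.Stderr, "key pair check failed: %v\n", err)
			os.Exit(1)
		}
		fmt.Println("certificate and private key match")
	}

Regenerating a matching pair (or letting minikube re-provision both files together) is what clears such a loop; the restart counter climbing in the journal shows systemd alone cannot recover from it.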
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-873587 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-20 17:35:53.919703878 +0000 UTC m=+3140.092632622
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-873587
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-873587:

-- stdout --
	[
	    {
	        "Id": "771f670258bddab562b25b86de76a8063ea2f108b004217ce53bb8a7d0be7772",
	        "Created": "2024-09-20T17:30:20.481002321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315866,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T17:31:10.816450167Z",
	            "FinishedAt": "2024-09-20T17:31:09.93262145Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/771f670258bddab562b25b86de76a8063ea2f108b004217ce53bb8a7d0be7772/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/771f670258bddab562b25b86de76a8063ea2f108b004217ce53bb8a7d0be7772/hostname",
	        "HostsPath": "/var/lib/docker/containers/771f670258bddab562b25b86de76a8063ea2f108b004217ce53bb8a7d0be7772/hosts",
	        "LogPath": "/var/lib/docker/containers/771f670258bddab562b25b86de76a8063ea2f108b004217ce53bb8a7d0be7772/771f670258bddab562b25b86de76a8063ea2f108b004217ce53bb8a7d0be7772-json.log",
	        "Name": "/kubernetes-upgrade-873587",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-873587:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-873587",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/33c10ad0f79d69c9a25c3bec259aae3ea3a2a85884a2a304ab63967c9c07a851-init/diff:/var/lib/docker/overlay2/04d8ee2bca91b716c0fbed8d5cf8682c2b84f5613656c8faad7ce3474f9e857f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33c10ad0f79d69c9a25c3bec259aae3ea3a2a85884a2a304ab63967c9c07a851/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33c10ad0f79d69c9a25c3bec259aae3ea3a2a85884a2a304ab63967c9c07a851/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33c10ad0f79d69c9a25c3bec259aae3ea3a2a85884a2a304ab63967c9c07a851/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-873587",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-873587/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-873587",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-873587",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-873587",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1dac016f15f478b44f55747b3c08a37bd25e5ac1ac66a10667bf03fd562170d7",
	            "SandboxKey": "/var/run/docker/netns/1dac016f15f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33039"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33040"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33041"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33042"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-873587": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "464212bdcc86ae9876b08873397eca6cbeff2c8c79a59d049797cde8dc9377b7",
	                    "EndpointID": "f866e2e2ab44b047578881fe93abf68660b23c198efac3c6810a21a8fd472396",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-873587",
	                        "771f670258bd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
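The inspect output shows the KIC container itself is healthy: State.Running is true, RestartCount is 0, and all five forwarded ports (22, 2376, 5000, 8443, 32443) are bound on 127.0.0.1. The failure is therefore inside the guest (the dockerd restart loop above), not at the Docker-container level. A short illustrative Go sketch of this kind of post-mortem probe, decoding only the State block of the docker inspect JSON; the struct mirrors the fields shown above and is not minikube's own helper:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Subset of the docker inspect JSON shown above.
	type inspectEntry struct {
		Name  string
		State struct {
			Status   string
			Running  bool
			ExitCode int
		}
	}

	func main() {
		// docker inspect prints a JSON array, one entry per container.
		out, err := exec.Command("docker", "inspect", "kubernetes-upgrade-873587").Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatalf("decode: %v", err)
		}
		for _, e := range entries {
			fmt.Printf("%s: status=%s running=%v exit=%d\n",
				e.Name, e.State.Status, e.State.Running, e.State.ExitCode)
		}
	}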
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-873587 -n kubernetes-upgrade-873587
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-873587 -n kubernetes-upgrade-873587: exit status 2 (364.129683ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
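minikube status encodes host, cluster, and Kubernetes health into the low bits of its exit code, so status 2 with "Running" on stdout indicates the host container is up while the cluster behind it is not healthy; the harness therefore treats the non-zero exit as tolerable ("may be ok") and proceeds to log collection. A small illustrative sketch of separating that exit code from a hard execution failure, using the same binary and flags as the test above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "kubernetes-upgrade-873587")
		out, err := cmd.Output()
		fmt.Printf("stdout: %s", out)

		// A non-zero exit surfaces as *exec.ExitError; the harness inspects
		// the code instead of treating every non-zero result as fatal.
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("exit code: %d\n", ee.ExitCode())
		}
	}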
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-873587 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-444657                             | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657                             | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | docker system info                                   |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657                             | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo cat                    | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo cat                    | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657                             | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo cat                    | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657                             | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-444657 sudo                        | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | crio config                                          |                           |         |         |                     |                     |
	| delete  | -p custom-flannel-444657                             | custom-flannel-444657     | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	| start   | -p calico-444657 --memory=3072                       | calico-444657             | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=docker                         |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-873587                         | kubernetes-upgrade-873587 | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-873587                         | kubernetes-upgrade-873587 | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | -p false-444657 pgrep -a                             | false-444657              | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-444657 pgrep -a                           | kindnet-444657            | jenkins | v1.34.0 | 20 Sep 24 17:35 UTC | 20 Sep 24 17:35 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:35:40
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:35:40.654083  396597 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:35:40.654484  396597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:35:40.654524  396597 out.go:358] Setting ErrFile to fd 2...
	I0920 17:35:40.654541  396597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:35:40.654861  396597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:35:40.655873  396597 out.go:352] Setting JSON to false
	I0920 17:35:40.657891  396597 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4685,"bootTime":1726849056,"procs":402,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:35:40.658070  396597 start.go:139] virtualization: kvm guest
	I0920 17:35:40.660171  396597 out.go:177] * [kubernetes-upgrade-873587] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:35:40.661714  396597 notify.go:220] Checking for updates...
	I0920 17:35:40.662910  396597 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:35:40.664344  396597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:35:40.665976  396597 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 17:35:40.667716  396597 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	I0920 17:35:40.669529  396597 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:35:40.671204  396597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:35:40.674323  396597 config.go:182] Loaded profile config "kubernetes-upgrade-873587": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:35:40.675087  396597 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:35:40.703897  396597 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:35:40.703978  396597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:35:40.764249  396597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:88 SystemTime:2024-09-20 17:35:40.748622608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:35:40.764371  396597 docker.go:318] overlay module found
	I0920 17:35:40.765999  396597 out.go:177] * Using the docker driver based on existing profile
	I0920 17:35:40.767479  396597 start.go:297] selected driver: docker
	I0920 17:35:40.767501  396597 start.go:901] validating driver "docker" against &{Name:kubernetes-upgrade-873587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-873587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:35:40.767639  396597 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:35:40.768818  396597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:35:40.819931  396597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:88 SystemTime:2024-09-20 17:35:40.810618549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:35:40.820264  396597 cni.go:84] Creating CNI manager for ""
	I0920 17:35:40.820319  396597 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:35:40.820364  396597 start.go:340] cluster config:
	{Name:kubernetes-upgrade-873587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-873587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:35:40.822545  396597 out.go:177] * Starting "kubernetes-upgrade-873587" primary control-plane node in "kubernetes-upgrade-873587" cluster
	I0920 17:35:40.823856  396597 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 17:35:40.825231  396597 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 17:35:40.826860  396597 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:35:40.826898  396597 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 17:35:40.826908  396597 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 17:35:40.826916  396597 cache.go:56] Caching tarball of preloaded images
	I0920 17:35:40.827173  396597 preload.go:172] Found /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 17:35:40.827202  396597 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 17:35:40.827325  396597 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/kubernetes-upgrade-873587/config.json ...
	W0920 17:35:40.851968  396597 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed is of wrong architecture
	I0920 17:35:40.851989  396597 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 17:35:40.852060  396597 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 17:35:40.852072  396597 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 17:35:40.852079  396597 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 17:35:40.852085  396597 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 17:35:40.852090  396597 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 17:35:40.904264  396597 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 17:35:40.904307  396597 cache.go:194] Successfully downloaded all kic artifacts
	I0920 17:35:40.904348  396597 start.go:360] acquireMachinesLock for kubernetes-upgrade-873587: {Name:mk222a66497da7390cc984d4040983c15d2591de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:35:40.904414  396597 start.go:364] duration metric: took 43.333µs to acquireMachinesLock for "kubernetes-upgrade-873587"
	I0920 17:35:40.904431  396597 start.go:96] Skipping create...Using existing machine configuration
	I0920 17:35:40.904439  396597 fix.go:54] fixHost starting: 
	I0920 17:35:40.904667  396597 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-873587 --format={{.State.Status}}
	I0920 17:35:40.923685  396597 fix.go:112] recreateIfNeeded on kubernetes-upgrade-873587: state=Running err=<nil>
	W0920 17:35:40.923723  396597 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 17:35:40.926402  396597 out.go:177] * Updating the running docker "kubernetes-upgrade-873587" container ...
	I0920 17:35:40.371668  385035 addons.go:234] Setting addon default-storageclass=true in "kindnet-444657"
	I0920 17:35:40.371713  385035 host.go:66] Checking if "kindnet-444657" exists ...
	I0920 17:35:40.372162  385035 cli_runner.go:164] Run: docker container inspect kindnet-444657 --format={{.State.Status}}
	I0920 17:35:40.372829  385035 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:35:40.372845  385035 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:35:40.372879  385035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-444657
	I0920 17:35:40.404203  385035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kindnet-444657/id_rsa Username:docker}
	I0920 17:35:40.406466  385035 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:35:40.406488  385035 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:35:40.406548  385035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-444657
	I0920 17:35:40.426834  385035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kindnet-444657/id_rsa Username:docker}
	I0920 17:35:40.492498  385035 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:35:40.494836  385035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:35:40.574613  385035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:35:40.576758  385035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:35:40.976190  385035 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0920 17:35:40.977810  385035 node_ready.go:35] waiting up to 15m0s for node "kindnet-444657" to be "Ready" ...
	I0920 17:35:40.989368  385035 node_ready.go:49] node "kindnet-444657" has status "Ready":"True"
	I0920 17:35:40.989397  385035 node_ready.go:38] duration metric: took 11.563174ms for node "kindnet-444657" to be "Ready" ...
	I0920 17:35:40.989409  385035 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:35:41.052467  385035 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:41.210124  385035 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 17:35:37.165719  394246 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-444657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (4.499735958s)
	I0920 17:35:37.165755  394246 kic.go:203] duration metric: took 4.499907058s to extract preloaded images to volume ...
	W0920 17:35:37.165923  394246 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 17:35:37.166057  394246 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 17:35:37.240113  394246 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-444657 --name calico-444657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-444657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-444657 --network calico-444657 --ip 192.168.85.2 --volume calico-444657:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 17:35:37.582452  394246 cli_runner.go:164] Run: docker container inspect calico-444657 --format={{.State.Running}}
	I0920 17:35:37.605311  394246 cli_runner.go:164] Run: docker container inspect calico-444657 --format={{.State.Status}}
	I0920 17:35:37.631556  394246 cli_runner.go:164] Run: docker exec calico-444657 stat /var/lib/dpkg/alternatives/iptables
	I0920 17:35:37.686828  394246 oci.go:144] the created container "calico-444657" has a running status.
	I0920 17:35:37.686862  394246 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/calico-444657/id_rsa...
	I0920 17:35:37.886480  394246 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-8616/.minikube/machines/calico-444657/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 17:35:37.912062  394246 cli_runner.go:164] Run: docker container inspect calico-444657 --format={{.State.Status}}
	I0920 17:35:37.937289  394246 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 17:35:37.937322  394246 kic_runner.go:114] Args: [docker exec --privileged calico-444657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 17:35:37.990380  394246 cli_runner.go:164] Run: docker container inspect calico-444657 --format={{.State.Status}}
	I0920 17:35:38.009137  394246 machine.go:93] provisionDockerMachine start ...
	I0920 17:35:38.009213  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:38.035957  394246 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:38.036203  394246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0920 17:35:38.036219  394246 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:35:38.036852  394246 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54830->127.0.0.1:33089: read: connection reset by peer
	I0920 17:35:41.179756  394246 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-444657
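The first dial at 17:35:38 hit "connection reset by peer" because sshd inside the freshly started container was not yet accepting connections; the successful "hostname" result three seconds later shows the provisioner keeps retrying until it is. A minimal sketch of that dial-and-retry pattern; the address reuses the mapped port from this log, and the timeouts are illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:33089" // host port mapped to the container's 22/tcp in this log
	deadline := time.Now().Add(30 * time.Second)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port is accepting connections")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up:", err)
			return
		}
		time.Sleep(500 * time.Millisecond) // retry while sshd finishes starting
	}
}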
	
	I0920 17:35:41.179791  394246 ubuntu.go:169] provisioning hostname "calico-444657"
	I0920 17:35:41.179856  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:41.207511  394246 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.207751  394246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0920 17:35:41.207775  394246 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-444657 && echo "calico-444657" | sudo tee /etc/hostname
	I0920 17:35:41.367209  394246 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-444657
	
	I0920 17:35:41.367277  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:41.392166  394246 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.392363  394246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0920 17:35:41.392387  394246 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-444657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-444657/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-444657' | sudo tee -a /etc/hosts; 
				fi
			fi
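This snippet follows the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1 in /etc/hosts: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. The same rewrite-or-append logic as a small self-contained Go sketch, operating on a string rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

// patchHosts mirrors the shell snippet above: rewrite an existing
// 127.0.1.1 line to the new hostname, or append one if none exists.
func patchHosts(hosts, name string) string {
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(patchHosts("127.0.0.1 localhost\n", "calico-444657"))
}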
	I0920 17:35:41.211430  385035 addons.go:510] duration metric: took 865.830994ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 17:35:41.480811  385035 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-444657" context rescaled to 1 replicas
	I0920 17:35:40.470719  373186 pod_ready.go:103] pod "coredns-7c65d6cfc9-98p92" in "kube-system" namespace has status "Ready":"False"
	I0920 17:35:42.970392  373186 pod_ready.go:103] pod "coredns-7c65d6cfc9-98p92" in "kube-system" namespace has status "Ready":"False"
	I0920 17:35:41.527269  394246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:35:41.527305  394246 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8616/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8616/.minikube}
	I0920 17:35:41.527373  394246 ubuntu.go:177] setting up certificates
	I0920 17:35:41.527392  394246 provision.go:84] configureAuth start
	I0920 17:35:41.527461  394246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-444657
	I0920 17:35:41.548299  394246 provision.go:143] copyHostCerts
	I0920 17:35:41.548370  394246 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem, removing ...
	I0920 17:35:41.548385  394246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem
	I0920 17:35:41.548457  394246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem (1679 bytes)
	I0920 17:35:41.548559  394246 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem, removing ...
	I0920 17:35:41.548568  394246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem
	I0920 17:35:41.548593  394246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem (1082 bytes)
	I0920 17:35:41.548666  394246 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem, removing ...
	I0920 17:35:41.548678  394246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem
	I0920 17:35:41.548714  394246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem (1123 bytes)
	I0920 17:35:41.548771  394246 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem org=jenkins.calico-444657 san=[127.0.0.1 192.168.85.2 calico-444657 localhost minikube]
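The generated server cert must carry every name and address a client might dial, which is why the SAN list above includes 127.0.0.1, the container IP 192.168.85.2, the machine name, localhost, and minikube. A sketch of that SAN set with crypto/x509, self-signed for brevity, whereas the real cert is signed by the minikube CA from ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs and org copied from the provision.go:117 line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-444657"}},
		DNSNames:     []string{"calico-444657", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // real certs live much longer
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}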
	I0920 17:35:41.763685  394246 provision.go:177] copyRemoteCerts
	I0920 17:35:41.763766  394246 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:35:41.763817  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:41.792813  394246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/calico-444657/id_rsa Username:docker}
	I0920 17:35:41.898376  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:35:41.923823  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:35:41.947877  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:35:41.977462  394246 provision.go:87] duration metric: took 450.050845ms to configureAuth
	I0920 17:35:41.977491  394246 ubuntu.go:193] setting minikube options for container-runtime
	I0920 17:35:41.977692  394246 config.go:182] Loaded profile config "calico-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:35:41.977746  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:42.000891  394246 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:42.001128  394246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0920 17:35:42.001138  394246 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 17:35:42.143141  394246 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 17:35:42.143168  394246 ubuntu.go:71] root file system type: overlay
	I0920 17:35:42.143304  394246 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 17:35:42.143367  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:42.165237  394246 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:42.165457  394246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0920 17:35:42.165549  394246 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 17:35:42.319047  394246 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 17:35:42.319140  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:42.340066  394246 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:42.340287  394246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I0920 17:35:42.340315  394246 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 17:35:43.132600  394246 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 17:35:42.314207722 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 17:35:43.132648  394246 machine.go:96] duration metric: took 5.123487216s to provisionDockerMachine
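The one-liner above relies on "diff -u" exiting non-zero when the files differ, so the "|| { mv; daemon-reload; enable; restart; }" branch runs only when the rendered unit actually changed; the unified diff printed here is the evidence that it did. (The identical step for kubernetes-upgrade-873587 later in this log prints nothing and skips the restart.) The change-detection idea as a minimal Go sketch, with the paths taken from the log and the actions only printed:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// A missing old unit simply counts as "changed", matching the mv-into-place path.
	oldUnit, _ := os.ReadFile("/lib/systemd/system/docker.service")
	newUnit, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		fmt.Println("no pending unit:", err)
		return
	}
	if bytes.Equal(oldUnit, newUnit) {
		fmt.Println("unit unchanged; skip restart")
		return
	}
	fmt.Println("unit changed; would mv into place, daemon-reload, enable, restart docker")
}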
	I0920 17:35:43.132666  394246 client.go:171] duration metric: took 11.322169566s to LocalClient.Create
	I0920 17:35:43.132689  394246 start.go:167] duration metric: took 11.322249239s to libmachine.API.Create "calico-444657"
	I0920 17:35:43.132703  394246 start.go:293] postStartSetup for "calico-444657" (driver="docker")
	I0920 17:35:43.132716  394246 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:35:43.132780  394246 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:35:43.132827  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:43.161373  394246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/calico-444657/id_rsa Username:docker}
	I0920 17:35:43.269153  394246 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:35:43.275277  394246 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:35:43.275334  394246 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:35:43.275352  394246 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:35:43.275361  394246 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 17:35:43.275374  394246 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/addons for local assets ...
	I0920 17:35:43.275445  394246 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/files for local assets ...
	I0920 17:35:43.275553  394246 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem -> 153982.pem in /etc/ssl/certs
	I0920 17:35:43.275712  394246 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:35:43.286638  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem --> /etc/ssl/certs/153982.pem (1708 bytes)
	I0920 17:35:43.311941  394246 start.go:296] duration metric: took 179.223692ms for postStartSetup
	I0920 17:35:43.312289  394246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-444657
	I0920 17:35:43.331722  394246 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/config.json ...
	I0920 17:35:43.331985  394246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:35:43.332031  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:43.356359  394246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/calico-444657/id_rsa Username:docker}
	I0920 17:35:43.456644  394246 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 17:35:43.461491  394246 start.go:128] duration metric: took 11.653309659s to createHost
	I0920 17:35:43.461517  394246 start.go:83] releasing machines lock for "calico-444657", held for 11.653446425s
	I0920 17:35:43.461584  394246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-444657
	I0920 17:35:43.481394  394246 ssh_runner.go:195] Run: cat /version.json
	I0920 17:35:43.481448  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:43.481480  394246 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:35:43.481538  394246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444657
	I0920 17:35:43.501560  394246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/calico-444657/id_rsa Username:docker}
	I0920 17:35:43.503336  394246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/calico-444657/id_rsa Username:docker}
	I0920 17:35:43.696788  394246 ssh_runner.go:195] Run: systemctl --version
	I0920 17:35:43.701363  394246 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:35:43.705826  394246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 17:35:43.731061  394246 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 17:35:43.731161  394246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:35:43.758285  394246 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
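Two CNI cleanups happen here: the loopback config is patched to carry a "name" field and a pinned "cniVersion" of 1.0.0, and any bridge/podman configs are renamed with a .mk_disabled suffix so they cannot conflict with the CNI this profile selects (calico). The loopback patch as a Go sketch; the input JSON is a guessed minimal example of such a file:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Guessed minimal loopback config; the real files live in /etc/cni/net.d.
	raw := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // the field the sed inserts when missing
	}
	conf["cniVersion"] = "1.0.0" // pinned, as in the sed above
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}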
	I0920 17:35:43.758316  394246 start.go:495] detecting cgroup driver to use...
	I0920 17:35:43.758356  394246 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:35:43.758483  394246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:35:43.775191  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:35:43.784992  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:35:43.795050  394246 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:35:43.795119  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:35:43.805920  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:35:43.817115  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:35:43.827134  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:35:43.836839  394246 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:35:43.846078  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:35:43.856083  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:35:43.866350  394246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:35:43.877072  394246 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:35:43.887821  394246 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:35:43.898590  394246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:44.000230  394246 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 17:35:44.113711  394246 start.go:495] detecting cgroup driver to use...
	I0920 17:35:44.113764  394246 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:35:44.113821  394246 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 17:35:44.125443  394246 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 17:35:44.125504  394246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:35:44.137274  394246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:35:44.157064  394246 ssh_runner.go:195] Run: which cri-dockerd
	I0920 17:35:44.161318  394246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:35:44.171916  394246 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:35:44.195603  394246 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 17:35:44.297467  394246 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 17:35:44.400610  394246 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:35:44.400768  394246 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
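Docker, containerd, and the kubelet must all agree on one cgroup driver; this run detected "cgroupfs" on the host, so SystemdCgroup was disabled in containerd's config above and a 130-byte daemon.json carrying the matching setting is written here. A sketch that queries the driver the daemon actually uses, via the same "docker info --format {{.CgroupDriver}}" check that appears later in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	// Expect "cgroupfs" on this host, matching the containerd and kubelet settings.
	fmt.Println("docker cgroup driver:", strings.TrimSpace(string(out)))
}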
	I0920 17:35:44.422241  394246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:44.517019  394246 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 17:35:44.893287  394246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 17:35:44.909360  394246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:35:44.925586  394246 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 17:35:45.030188  394246 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 17:35:45.122344  394246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:45.214087  394246 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 17:35:45.228992  394246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:35:45.239829  394246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:45.329350  394246 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 17:35:45.393926  394246 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 17:35:45.393983  394246 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 17:35:45.398058  394246 start.go:563] Will wait 60s for crictl version
	I0920 17:35:45.398111  394246 ssh_runner.go:195] Run: which crictl
	I0920 17:35:45.401451  394246 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:35:45.433880  394246 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 17:35:45.433947  394246 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:35:45.459310  394246 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:35:40.927737  396597 machine.go:93] provisionDockerMachine start ...
	I0920 17:35:40.927812  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:40.944357  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:40.944657  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:40.944679  396597 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:35:41.086883  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-873587
	
	I0920 17:35:41.086963  396597 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-873587"
	I0920 17:35:41.087103  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.105674  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.105852  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.105862  396597 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-873587 && echo "kubernetes-upgrade-873587" | sudo tee /etc/hostname
	I0920 17:35:41.266911  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-873587
	
	I0920 17:35:41.267020  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.290700  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.290944  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.290966  396597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-873587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-873587/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-873587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:35:41.431066  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:35:41.431096  396597 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8616/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8616/.minikube}
	I0920 17:35:41.431118  396597 ubuntu.go:177] setting up certificates
	I0920 17:35:41.431131  396597 provision.go:84] configureAuth start
	I0920 17:35:41.431187  396597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-873587
	I0920 17:35:41.451077  396597 provision.go:143] copyHostCerts
	I0920 17:35:41.451151  396597 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem, removing ...
	I0920 17:35:41.451163  396597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem
	I0920 17:35:41.451236  396597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/cert.pem (1123 bytes)
	I0920 17:35:41.451421  396597 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem, removing ...
	I0920 17:35:41.451437  396597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem
	I0920 17:35:41.451481  396597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/key.pem (1679 bytes)
	I0920 17:35:41.451556  396597 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem, removing ...
	I0920 17:35:41.451567  396597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem
	I0920 17:35:41.451600  396597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8616/.minikube/ca.pem (1082 bytes)
	I0920 17:35:41.451677  396597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-873587 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-873587 localhost minikube]
	I0920 17:35:41.584969  396597 provision.go:177] copyRemoteCerts
	I0920 17:35:41.585056  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:35:41.585110  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.608577  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:41.709264  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 17:35:41.735451  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 17:35:41.767666  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:35:41.804991  396597 provision.go:87] duration metric: took 373.843691ms to configureAuth
	I0920 17:35:41.805031  396597 ubuntu.go:193] setting minikube options for container-runtime
	I0920 17:35:41.805252  396597 config.go:182] Loaded profile config "kubernetes-upgrade-873587": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:35:41.805316  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.825131  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.825304  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.825315  396597 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 17:35:41.960847  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 17:35:41.960874  396597 ubuntu.go:71] root file system type: overlay
	I0920 17:35:41.961008  396597 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 17:35:41.961076  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:41.984303  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:41.984538  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:41.984634  396597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 17:35:42.134216  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 17:35:42.134287  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.169215  396597 main.go:141] libmachine: Using SSH client type: native
	I0920 17:35:42.169452  396597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 33039 <nil> <nil>}
	I0920 17:35:42.169479  396597 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 17:35:42.308866  396597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:35:42.308896  396597 machine.go:96] duration metric: took 1.38114105s to provisionDockerMachine
	I0920 17:35:42.308910  396597 start.go:293] postStartSetup for "kubernetes-upgrade-873587" (driver="docker")
	I0920 17:35:42.308923  396597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:35:42.308999  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:35:42.309050  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.331138  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.432063  396597 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:35:42.435238  396597 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:35:42.435267  396597 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:35:42.435275  396597 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:35:42.435282  396597 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 17:35:42.435295  396597 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/addons for local assets ...
	I0920 17:35:42.435351  396597 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8616/.minikube/files for local assets ...
	I0920 17:35:42.435432  396597 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem -> 153982.pem in /etc/ssl/certs
	I0920 17:35:42.435522  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 17:35:42.443892  396597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem --> /etc/ssl/certs/153982.pem (1708 bytes)
	I0920 17:35:42.467539  396597 start.go:296] duration metric: took 158.612986ms for postStartSetup
	I0920 17:35:42.467648  396597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:35:42.467702  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.486458  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.575800  396597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 17:35:42.581049  396597 fix.go:56] duration metric: took 1.676600539s for fixHost
	I0920 17:35:42.581079  396597 start.go:83] releasing machines lock for "kubernetes-upgrade-873587", held for 1.676653276s
	I0920 17:35:42.581153  396597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-873587
	I0920 17:35:42.599066  396597 ssh_runner.go:195] Run: cat /version.json
	I0920 17:35:42.599114  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.599141  396597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:35:42.599226  396597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-873587
	I0920 17:35:42.623103  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.624642  396597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33039 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/kubernetes-upgrade-873587/id_rsa Username:docker}
	I0920 17:35:42.710546  396597 ssh_runner.go:195] Run: systemctl --version
	I0920 17:35:42.791595  396597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:35:42.796464  396597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 17:35:42.817745  396597 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 17:35:42.817846  396597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 17:35:42.835098  396597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 17:35:42.852196  396597 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:35:42.852267  396597 start.go:495] detecting cgroup driver to use...
	I0920 17:35:42.852305  396597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:35:42.852421  396597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:35:42.886165  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:35:42.901176  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:35:42.913579  396597 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:35:42.913640  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:35:42.925128  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:35:42.948266  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:35:42.959039  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:35:42.971880  396597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:35:42.981174  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:35:42.991618  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:35:43.001728  396597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:35:43.012134  396597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:35:43.021091  396597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:35:43.029799  396597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:43.127578  396597 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 17:35:45.486187  394246 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 17:35:45.486271  394246 cli_runner.go:164] Run: docker network inspect calico-444657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 17:35:45.503460  394246 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0920 17:35:45.507179  394246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:35:45.517813  394246 kubeadm.go:883] updating cluster {Name:calico-444657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-444657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:35:45.517946  394246 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:35:45.518002  394246 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:35:45.537969  394246 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 17:35:45.537997  394246 docker.go:615] Images already preloaded, skipping extraction
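The preload decision is a simple set check: list what the daemon already has with "docker images --format {{.Repository}}:{{.Tag}}" and skip tarball extraction when the expected Kubernetes v1.31.1 images are all present. A minimal sketch of that check, with only two of the expected images spelled out:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Two of the images the log expects; the full list is printed above.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	} {
		fmt.Println(want, "present:", have[want])
	}
}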
	I0920 17:35:45.538103  394246 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:35:45.558928  394246 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 17:35:45.558952  394246 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:35:45.558964  394246 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 docker true true} ...
	I0920 17:35:45.559113  394246 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-444657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:calico-444657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0920 17:35:45.559183  394246 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 17:35:45.601858  394246 cni.go:84] Creating CNI manager for "calico"
	I0920 17:35:45.601885  394246 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:35:45.601911  394246 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-444657 NodeName:calico-444657 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:35:45.602055  394246 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-444657"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
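
A note on the generated config above: it still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm itself warns about further down (17:35:47.200518). To sanity-check or modernize such a file by hand before an init, recent kubeadm releases ship helpers for both; a minimal sketch, assuming kubeadm v1.31 is on PATH and the file sits where minikube copied it:

    # validate the config as-is (available in recent kubeadm releases)
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # rewrite the deprecated v1beta3 spec to the current API, as the warning suggests
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-new.yaml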
	
	I0920 17:35:45.602113  394246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:35:45.611127  394246 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:35:45.611206  394246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:35:45.620261  394246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 17:35:45.637601  394246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:35:45.654706  394246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0920 17:35:45.672130  394246 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0920 17:35:45.675728  394246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:35:45.686280  394246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:45.766415  394246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:35:45.780326  394246 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657 for IP: 192.168.85.2
	I0920 17:35:45.780355  394246 certs.go:194] generating shared ca certs ...
	I0920 17:35:45.780401  394246 certs.go:226] acquiring lock for ca certs: {Name:mk7859bcc6bcc87de2e2da04bdba4ac21b3ab143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:35:45.780539  394246 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key
	I0920 17:35:45.780584  394246 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key
	I0920 17:35:45.780593  394246 certs.go:256] generating profile certs ...
	I0920 17:35:45.780646  394246 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/client.key
	I0920 17:35:45.780664  394246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/client.crt with IP's: []
	I0920 17:35:45.998797  394246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/client.crt ...
	I0920 17:35:45.998830  394246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/client.crt: {Name:mkb05499894998101e17528d73d27b6bd533e715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:35:45.999045  394246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/client.key ...
	I0920 17:35:45.999064  394246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/client.key: {Name:mk39dfaeb3174d9bc45ec9a94703d550c688ec61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:35:45.999173  394246 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.key.8368af3b
	I0920 17:35:45.999192  394246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.crt.8368af3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0920 17:35:46.416063  394246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.crt.8368af3b ...
	I0920 17:35:46.416094  394246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.crt.8368af3b: {Name:mk0b41f0b514c321253d2514b606f7dd2c3e7506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:35:46.416267  394246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.key.8368af3b ...
	I0920 17:35:46.416280  394246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.key.8368af3b: {Name:mk4df89ca2b2c6323361e61fa763989b6a23b707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:35:46.416356  394246 certs.go:381] copying /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.crt.8368af3b -> /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.crt
	I0920 17:35:46.416432  394246 certs.go:385] copying /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.key.8368af3b -> /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.key
	I0920 17:35:46.416483  394246 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.key
	I0920 17:35:46.416498  394246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.crt with IP's: []
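	The profile certs above are generated in-process by minikube's crypto helpers rather than by shelling out. Purely as an illustration of what is being produced (not minikube's actual code path), an equivalent apiserver cert with the same SANs could be cut with openssl roughly like this, using the profile CA (ca.crt/ca.key) named in the log:

    # illustrative sketch only; SAN IPs taken from the log line above (bash for the <() substitution)
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2') \
      -out apiserver.crt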
	I0920 17:35:43.058587  385035 pod_ready.go:103] pod "etcd-kindnet-444657" in "kube-system" namespace has status "Ready":"False"
	I0920 17:35:45.058685  385035 pod_ready.go:103] pod "etcd-kindnet-444657" in "kube-system" namespace has status "Ready":"False"
	I0920 17:35:46.558403  385035 pod_ready.go:93] pod "etcd-kindnet-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:46.558428  385035 pod_ready.go:82] duration metric: took 5.505928712s for pod "etcd-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.558440  385035 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.563003  385035 pod_ready.go:93] pod "kube-apiserver-kindnet-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:46.563028  385035 pod_ready.go:82] duration metric: took 4.580214ms for pod "kube-apiserver-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.563038  385035 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.567500  385035 pod_ready.go:93] pod "kube-controller-manager-kindnet-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:46.567524  385035 pod_ready.go:82] duration metric: took 4.477423ms for pod "kube-controller-manager-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.567536  385035 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.571608  385035 pod_ready.go:93] pod "kube-scheduler-kindnet-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:46.571631  385035 pod_ready.go:82] duration metric: took 4.087011ms for pod "kube-scheduler-kindnet-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.571640  385035 pod_ready.go:39] duration metric: took 5.582218453s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:35:46.571663  385035 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:35:46.571717  385035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:35:46.583146  385035 api_server.go:72] duration metric: took 6.237559807s to wait for apiserver process to appear ...
	I0920 17:35:46.583171  385035 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:35:46.583194  385035 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0920 17:35:46.587528  385035 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0920 17:35:46.588431  385035 api_server.go:141] control plane version: v1.31.1
	I0920 17:35:46.588452  385035 api_server.go:131] duration metric: took 5.275062ms to wait for apiserver health ...
	I0920 17:35:46.588462  385035 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:35:46.594048  385035 system_pods.go:59] 8 kube-system pods found
	I0920 17:35:46.594081  385035 system_pods.go:61] "coredns-7c65d6cfc9-4w697" [28cb5009-381e-4b0c-983a-eebcd5669133] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 17:35:46.594089  385035 system_pods.go:61] "etcd-kindnet-444657" [1cc2c1e9-43b2-456a-af66-6b071c726160] Running
	I0920 17:35:46.594097  385035 system_pods.go:61] "kindnet-4zc2c" [b1da0e5c-b279-4202-ba0f-803867ddc411] Running
	I0920 17:35:46.594102  385035 system_pods.go:61] "kube-apiserver-kindnet-444657" [452ab259-9199-462d-b670-5e20a4c7b6a5] Running
	I0920 17:35:46.594110  385035 system_pods.go:61] "kube-controller-manager-kindnet-444657" [50a751fe-6aab-480e-8438-b4fdefd160d2] Running
	I0920 17:35:46.594118  385035 system_pods.go:61] "kube-proxy-5j8kz" [9f8e715a-8e8e-4f34-8321-f2f8c6d47bf7] Running
	I0920 17:35:46.594124  385035 system_pods.go:61] "kube-scheduler-kindnet-444657" [8e08e993-6301-4732-89d2-e5f409ba90f1] Running
	I0920 17:35:46.594131  385035 system_pods.go:61] "storage-provisioner" [2f9d6a27-5b23-4851-915f-61367cc13246] Running
	I0920 17:35:46.594139  385035 system_pods.go:74] duration metric: took 5.670627ms to wait for pod list to return data ...
	I0920 17:35:46.594150  385035 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:35:46.596721  385035 default_sa.go:45] found service account: "default"
	I0920 17:35:46.596743  385035 default_sa.go:55] duration metric: took 2.583637ms for default service account to be created ...
	I0920 17:35:46.596763  385035 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:35:46.758343  385035 system_pods.go:86] 8 kube-system pods found
	I0920 17:35:46.758382  385035 system_pods.go:89] "coredns-7c65d6cfc9-4w697" [28cb5009-381e-4b0c-983a-eebcd5669133] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 17:35:46.758391  385035 system_pods.go:89] "etcd-kindnet-444657" [1cc2c1e9-43b2-456a-af66-6b071c726160] Running
	I0920 17:35:46.758400  385035 system_pods.go:89] "kindnet-4zc2c" [b1da0e5c-b279-4202-ba0f-803867ddc411] Running
	I0920 17:35:46.758407  385035 system_pods.go:89] "kube-apiserver-kindnet-444657" [452ab259-9199-462d-b670-5e20a4c7b6a5] Running
	I0920 17:35:46.758413  385035 system_pods.go:89] "kube-controller-manager-kindnet-444657" [50a751fe-6aab-480e-8438-b4fdefd160d2] Running
	I0920 17:35:46.758419  385035 system_pods.go:89] "kube-proxy-5j8kz" [9f8e715a-8e8e-4f34-8321-f2f8c6d47bf7] Running
	I0920 17:35:46.758424  385035 system_pods.go:89] "kube-scheduler-kindnet-444657" [8e08e993-6301-4732-89d2-e5f409ba90f1] Running
	I0920 17:35:46.758429  385035 system_pods.go:89] "storage-provisioner" [2f9d6a27-5b23-4851-915f-61367cc13246] Running
	I0920 17:35:46.758440  385035 system_pods.go:126] duration metric: took 161.670423ms to wait for k8s-apps to be running ...
	I0920 17:35:46.758453  385035 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:35:46.758503  385035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:35:46.770652  385035 system_svc.go:56] duration metric: took 12.191803ms WaitForService to wait for kubelet
	I0920 17:35:46.770683  385035 kubeadm.go:582] duration metric: took 6.425107256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:35:46.770707  385035 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:35:46.956781  385035 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 17:35:46.956809  385035 node_conditions.go:123] node cpu capacity is 8
	I0920 17:35:46.956822  385035 node_conditions.go:105] duration metric: took 186.105413ms to run NodePressure ...
	I0920 17:35:46.956832  385035 start.go:241] waiting for startup goroutines ...
	I0920 17:35:46.956839  385035 start.go:246] waiting for cluster config update ...
	I0920 17:35:46.956851  385035 start.go:255] writing updated cluster config ...
	I0920 17:35:46.957125  385035 ssh_runner.go:195] Run: rm -f paused
	I0920 17:35:47.008336  385035 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:35:47.011497  385035 out.go:177] * Done! kubectl is now configured to use "kindnet-444657" cluster and "default" namespace by default
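	The healthz probe logged at 17:35:46.583194 is an ordinary HTTPS GET; on a default cluster /healthz is readable without credentials (it is covered by the system:public-info-viewer binding), so it can be reproduced from the host. A manual equivalent, assuming the same endpoint as above:

    # expect the body "ok" and HTTP 200, matching the log
    curl -sk https://192.168.94.2:8443/healthz
    # or verify the server cert against the cluster CA instead of using -k
    curl -s --cacert /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt https://192.168.94.2:8443/healthz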
	I0920 17:35:44.972059  373186 pod_ready.go:103] pod "coredns-7c65d6cfc9-98p92" in "kube-system" namespace has status "Ready":"False"
	I0920 17:35:45.969192  373186 pod_ready.go:93] pod "coredns-7c65d6cfc9-98p92" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:45.969214  373186 pod_ready.go:82] duration metric: took 26.005939924s for pod "coredns-7c65d6cfc9-98p92" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.969226  373186 pod_ready.go:79] waiting up to 15m0s for pod "etcd-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.973734  373186 pod_ready.go:93] pod "etcd-false-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:45.973758  373186 pod_ready.go:82] duration metric: took 4.525628ms for pod "etcd-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.973767  373186 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.978335  373186 pod_ready.go:93] pod "kube-apiserver-false-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:45.978356  373186 pod_ready.go:82] duration metric: took 4.583029ms for pod "kube-apiserver-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.978367  373186 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.982806  373186 pod_ready.go:93] pod "kube-controller-manager-false-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:45.982825  373186 pod_ready.go:82] duration metric: took 4.452165ms for pod "kube-controller-manager-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.982833  373186 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-wj445" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.986489  373186 pod_ready.go:93] pod "kube-proxy-wj445" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:45.986513  373186 pod_ready.go:82] duration metric: took 3.673075ms for pod "kube-proxy-wj445" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:45.986524  373186 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.366948  373186 pod_ready.go:93] pod "kube-scheduler-false-444657" in "kube-system" namespace has status "Ready":"True"
	I0920 17:35:46.367021  373186 pod_ready.go:82] duration metric: took 380.485473ms for pod "kube-scheduler-false-444657" in "kube-system" namespace to be "Ready" ...
	I0920 17:35:46.367034  373186 pod_ready.go:39] duration metric: took 37.420304662s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:35:46.367061  373186 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:35:46.367124  373186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:35:46.378937  373186 api_server.go:72] duration metric: took 38.393279179s to wait for apiserver process to appear ...
	I0920 17:35:46.378963  373186 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:35:46.379014  373186 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0920 17:35:46.383734  373186 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0920 17:35:46.384631  373186 api_server.go:141] control plane version: v1.31.1
	I0920 17:35:46.384654  373186 api_server.go:131] duration metric: took 5.684972ms to wait for apiserver health ...
	I0920 17:35:46.384662  373186 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:35:46.569994  373186 system_pods.go:59] 7 kube-system pods found
	I0920 17:35:46.570018  373186 system_pods.go:61] "coredns-7c65d6cfc9-98p92" [bf146c73-0cfb-466f-9ac0-842092697b88] Running
	I0920 17:35:46.570023  373186 system_pods.go:61] "etcd-false-444657" [c378af05-ef73-4e4b-a713-312735cc5de6] Running
	I0920 17:35:46.570027  373186 system_pods.go:61] "kube-apiserver-false-444657" [78ecbe4d-ded4-4153-af61-23f25fb2d46b] Running
	I0920 17:35:46.570031  373186 system_pods.go:61] "kube-controller-manager-false-444657" [4b944143-513f-4924-8908-5930b5b64ae3] Running
	I0920 17:35:46.570035  373186 system_pods.go:61] "kube-proxy-wj445" [2a8fce1d-19b8-4640-8b9c-aa67f7a5160f] Running
	I0920 17:35:46.570038  373186 system_pods.go:61] "kube-scheduler-false-444657" [b4e2e4d9-598e-4385-8eec-5a8cc24cf748] Running
	I0920 17:35:46.570042  373186 system_pods.go:61] "storage-provisioner" [fa2f6de9-4f6d-47ba-8bfc-3f2a03d0001d] Running
	I0920 17:35:46.570048  373186 system_pods.go:74] duration metric: took 185.38035ms to wait for pod list to return data ...
	I0920 17:35:46.570057  373186 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:35:46.767552  373186 default_sa.go:45] found service account: "default"
	I0920 17:35:46.767577  373186 default_sa.go:55] duration metric: took 197.514919ms for default service account to be created ...
	I0920 17:35:46.767587  373186 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:35:46.968963  373186 system_pods.go:86] 7 kube-system pods found
	I0920 17:35:46.968995  373186 system_pods.go:89] "coredns-7c65d6cfc9-98p92" [bf146c73-0cfb-466f-9ac0-842092697b88] Running
	I0920 17:35:46.969002  373186 system_pods.go:89] "etcd-false-444657" [c378af05-ef73-4e4b-a713-312735cc5de6] Running
	I0920 17:35:46.969007  373186 system_pods.go:89] "kube-apiserver-false-444657" [78ecbe4d-ded4-4153-af61-23f25fb2d46b] Running
	I0920 17:35:46.969012  373186 system_pods.go:89] "kube-controller-manager-false-444657" [4b944143-513f-4924-8908-5930b5b64ae3] Running
	I0920 17:35:46.969017  373186 system_pods.go:89] "kube-proxy-wj445" [2a8fce1d-19b8-4640-8b9c-aa67f7a5160f] Running
	I0920 17:35:46.969022  373186 system_pods.go:89] "kube-scheduler-false-444657" [b4e2e4d9-598e-4385-8eec-5a8cc24cf748] Running
	I0920 17:35:46.969027  373186 system_pods.go:89] "storage-provisioner" [fa2f6de9-4f6d-47ba-8bfc-3f2a03d0001d] Running
	I0920 17:35:46.969036  373186 system_pods.go:126] duration metric: took 201.442554ms to wait for k8s-apps to be running ...
	I0920 17:35:46.969048  373186 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:35:46.969091  373186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:35:46.984127  373186 system_svc.go:56] duration metric: took 15.06977ms WaitForService to wait for kubelet
	I0920 17:35:46.984159  373186 kubeadm.go:582] duration metric: took 38.998506943s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:35:46.984184  373186 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:35:47.167641  373186 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 17:35:47.167673  373186 node_conditions.go:123] node cpu capacity is 8
	I0920 17:35:47.167687  373186 node_conditions.go:105] duration metric: took 183.49684ms to run NodePressure ...
	I0920 17:35:47.167700  373186 start.go:241] waiting for startup goroutines ...
	I0920 17:35:47.167709  373186 start.go:246] waiting for cluster config update ...
	I0920 17:35:47.167722  373186 start.go:255] writing updated cluster config ...
	I0920 17:35:47.168026  373186 ssh_runner.go:195] Run: rm -f paused
	I0920 17:35:47.221377  373186 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:35:47.223959  373186 out.go:177] * Done! kubectl is now configured to use "false-444657" cluster and "default" namespace by default
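	The closing version check ("minor skew: 0") simply compares the kubectl client and cluster minor versions; minikube warns when they drift more than a minor version apart. A rough manual equivalent, assuming kubectl and jq are available:

    # prints client and server minor versions side by side
    kubectl version -o json | jq -r '[.clientVersion.minor, .serverVersion.minor] | @tsv'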
	I0920 17:35:46.646332  394246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.crt ...
	I0920 17:35:46.646361  394246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.crt: {Name:mk6fae68ccb56977c52eea77af580e756f0e4e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:35:46.646568  394246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.key ...
	I0920 17:35:46.646582  394246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.key: {Name:mk1fa8a2ba7f14676b9679a19c1ad03040847012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:35:46.646749  394246 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/15398.pem (1338 bytes)
	W0920 17:35:46.646783  394246 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-8616/.minikube/certs/15398_empty.pem, impossibly tiny 0 bytes
	I0920 17:35:46.646793  394246 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 17:35:46.646815  394246 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:35:46.646839  394246 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:35:46.646859  394246 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/certs/key.pem (1679 bytes)
	I0920 17:35:46.646913  394246 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem (1708 bytes)
	I0920 17:35:46.647575  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:35:46.672868  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:35:46.699266  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:35:46.721787  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:35:46.744387  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:35:46.769629  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:35:46.792742  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:35:46.817326  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/calico-444657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:35:46.841229  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:35:46.866761  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/certs/15398.pem --> /usr/share/ca-certificates/15398.pem (1338 bytes)
	I0920 17:35:46.899108  394246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/ssl/certs/153982.pem --> /usr/share/ca-certificates/153982.pem (1708 bytes)
	I0920 17:35:46.924462  394246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:35:46.942580  394246 ssh_runner.go:195] Run: openssl version
	I0920 17:35:46.947878  394246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:35:46.958028  394246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:35:46.961607  394246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:35:46.961673  394246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:35:46.969841  394246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:35:46.979761  394246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15398.pem && ln -fs /usr/share/ca-certificates/15398.pem /etc/ssl/certs/15398.pem"
	I0920 17:35:46.990771  394246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15398.pem
	I0920 17:35:46.994852  394246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 16:58 /usr/share/ca-certificates/15398.pem
	I0920 17:35:46.994907  394246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15398.pem
	I0920 17:35:47.001689  394246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15398.pem /etc/ssl/certs/51391683.0"
	I0920 17:35:47.012109  394246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/153982.pem && ln -fs /usr/share/ca-certificates/153982.pem /etc/ssl/certs/153982.pem"
	I0920 17:35:47.021443  394246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/153982.pem
	I0920 17:35:47.024849  394246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 16:58 /usr/share/ca-certificates/153982.pem
	I0920 17:35:47.024904  394246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/153982.pem
	I0920 17:35:47.033511  394246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/153982.pem /etc/ssl/certs/3ec20f2e.0"
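	The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: "openssl x509 -hash" prints the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, and each symlink must be that hash plus a ".0" suffix. The same wiring by hand, mirroring the commands in the log:

    # derive the subject hash and create the lookup symlink
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"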
	I0920 17:35:47.044204  394246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:35:47.047509  394246 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:35:47.047559  394246 kubeadm.go:392] StartCluster: {Name:calico-444657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-444657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:35:47.047668  394246 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 17:35:47.065649  394246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:35:47.074472  394246 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:35:47.082753  394246 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 17:35:47.082814  394246 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:35:47.091652  394246 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:35:47.091673  394246 kubeadm.go:157] found existing configuration files:
	
	I0920 17:35:47.091725  394246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:35:47.100170  394246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:35:47.100234  394246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:35:47.108408  394246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:35:47.116650  394246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:35:47.116707  394246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:35:47.125057  394246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:35:47.133119  394246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:35:47.133168  394246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:35:47.140994  394246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:35:47.149256  394246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:35:47.149311  394246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:35:47.157265  394246 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 17:35:47.200518  394246 kubeadm.go:310] W0920 17:35:47.199843    1958 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:35:47.201067  394246 kubeadm.go:310] W0920 17:35:47.200566    1958 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:35:47.222851  394246 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0920 17:35:47.300996  394246 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
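	Both preflight warnings above are benign here. The SystemVerification one fires because kubeadm looks for the running kernel's config (via the "configs" module that exposes /proc/config.gz, or /boot/config-$(uname -r)), and this GCP kernel ships neither; that is also why minikube ignores SystemVerification under the docker driver (17:35:47.082753). To see what kubeadm was probing for, assuming a typical host:

    # kubeadm's kernel-config probe, by hand
    sudo modprobe configs 2>/dev/null; ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null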
	I0920 17:35:53.344787  396597 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.217170362s)
	I0920 17:35:53.344820  396597 start.go:495] detecting cgroup driver to use...
	I0920 17:35:53.344854  396597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:35:53.344903  396597 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 17:35:53.366809  396597 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 17:35:53.366898  396597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:35:53.385901  396597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:35:53.409473  396597 ssh_runner.go:195] Run: which cri-dockerd
	I0920 17:35:53.413938  396597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:35:53.426911  396597 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:35:53.448660  396597 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 17:35:53.564176  396597 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 17:35:53.677779  396597 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:35:53.677915  396597 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 17:35:53.702618  396597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:35:53.794319  396597 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 17:35:53.861243  396597 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0920 17:35:53.885383  396597 out.go:201] 
	W0920 17:35:53.887032  396597 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 20 17:31:11 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.586108848Z" level=info msg="Starting up"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.610407430Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.645230579Z" level=info msg="Loading containers: start."
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.788063737Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.835161696Z" level=info msg="Loading containers: done."
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.844767415Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.844841510Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.867956063Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:11 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:11.867986915Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:11 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.133663753Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.135650660Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[387]: time="2024-09-20T17:31:16.136518013Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.249935226Z" level=info msg="Starting up"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.277956065Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.293392201Z" level=info msg="Loading containers: start."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.477370680Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.534303909Z" level=info msg="Loading containers: done."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.552425213Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.552495324Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.585436148Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.585451203Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.615196449Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.617081286Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[857]: time="2024-09-20T17:31:16.617981804Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.661934444Z" level=info msg="Starting up"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.682593795Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.692306244Z" level=info msg="Loading containers: start."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.856548271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.904845813Z" level=info msg="Loading containers: done."
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.915555552Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.915636752Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.940305096Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:16 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:16.940391459Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:16 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.432548352Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.434493419Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1121]: time="2024-09-20T17:31:20.435517142Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:31:20 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:20.473750102Z" level=info msg="Starting up"
	Sep 20 17:31:20 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:20.493449037Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.409244672Z" level=info msg="Loading containers: start."
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.555214453Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.602541464Z" level=info msg="Loading containers: done."
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.615527856Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.615608187Z" level=info msg="Daemon has completed initialization"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.640153450Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 20 17:31:22 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:22.640181954Z" level=info msg="API listen on [::]:2376"
	Sep 20 17:31:22 kubernetes-upgrade-873587 systemd[1]: Started Docker Application Container Engine.
	Sep 20 17:31:46 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:31:46.203208646Z" level=info msg="ignoring event" container=9b3506e9460736d419e9fd2c8a84f3792b91975d0922e44d979f91c9e6cca44f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:07 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:07.219314787Z" level=info msg="ignoring event" container=0aeea47e1f1e38da46be4a4231afc01dc1cfff2e558ee1e8c78c05c1fcf8adb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:07 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:07.311487475Z" level=info msg="ignoring event" container=3e7564736f9c93c46738e82fee3e8a789d26c1a14373cc1c8d7492d3e49ac2b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:21 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:21.059938641Z" level=info msg="ignoring event" container=3b2d1b08c2677fe2dca2144c3a149081da6df7b39dc38490a90c0483ab51e4dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:48 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:48.246624349Z" level=info msg="ignoring event" container=0c79f21d6fa6d7651e792bf84c9464a73cf0d8faabdb8dd2a3c8eee3480a69b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:32:52 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:32:52.556789831Z" level=info msg="ignoring event" container=5b749582d4a32488f3d8309729bfe1f0e433edbed1c0c30c239c7b57db8ac1e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:33:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:33:29.040742386Z" level=info msg="ignoring event" container=8cd04d9c4d49bc581c950ff8d4dbc3c3ba6bb9300cc332ee52530b4bd5c6f553 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:33:40 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:33:40.066388875Z" level=info msg="ignoring event" container=66942b88616e91d221966b171e0fa00574895213ad102060b99083d4324c3db5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:34:34 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:34:34.060996505Z" level=info msg="ignoring event" container=5e194648c75f65737ba288b0bbc7f09bacefa4c4b0e263b00372d0919723aebe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:34:45 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:34:45.083894610Z" level=info msg="ignoring event" container=3981f33f9e2e649e957cadd5de32da70cd994f103698365889db9018f8acd9ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.317890818Z" level=info msg="ignoring event" container=5109588067a484876b18d1242c09951545cdc70291f3a3624f284e137014e5ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.392518054Z" level=info msg="ignoring event" container=87b581f1a4b6933206e9c039814bee5ce1eebb29fb7fd803a9880d1320bcfd8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.462458405Z" level=info msg="ignoring event" container=0cad685cce60f73ca96e1f651b0bac5e5298251054242ec67d8b1f6f6681cb47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:29 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:29.530805054Z" level=info msg="ignoring event" container=b0fe5a4e23af646edad8e10114cf9aa43882490150aa6fd103a2c800b2d10dc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 systemd[1]: Stopping Docker Application Container Engine...
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.137925197Z" level=info msg="Processing signal 'terminated'"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.279976137Z" level=info msg="ignoring event" container=ea685766a237359d8ab7c6cc90e5197f4a10b4cad9033244443be60e2187c31a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.281701681Z" level=info msg="ignoring event" container=e8bfa342fed94df001310d4868f7bb417a69727ba686e3e84bb17eff26225b28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.282893565Z" level=info msg="ignoring event" container=a0af9610a890ffd85d6ecb453d4bdfcddca2d4c443b426574a76a6feb3cc912a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.345205240Z" level=info msg="ignoring event" container=135b53a106c2835bc991807853e20dda6f7fb0cfe40f1c4ac90b7879f6355c91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.347684220Z" level=info msg="ignoring event" container=f16fbf0045d50080ca53e7dfb664b4adf3846de39a9a600e714a03a64ad11841 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.358258284Z" level=info msg="ignoring event" container=fe99147fb2ad3489546bb6554a27ef459e51083af5aa9ccc8d993c44c4d8b279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:43 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:43.361793505Z" level=info msg="ignoring event" container=b3a15ff1810f47cd2a2dad82ccc50b1d56f57266ca53c24321e9937d0852b534 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.173983349Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=53b942e551be140497adea5859a88dcdad4202f125e0bf56afb6335da0047534
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.212400680Z" level=info msg="ignoring event" container=53b942e551be140497adea5859a88dcdad4202f125e0bf56afb6335da0047534 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.243501737Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[1551]: time="2024-09-20T17:35:53.244555166Z" level=info msg="Daemon shutdown complete"
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13007]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13052]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13105]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0920 17:35:53.887091  396597 out.go:270] * 
	W0920 17:35:53.888413  396597 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 17:35:53.892044  396597 out.go:201] 
	
	
	==> Docker <==
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Deactivated successfully.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:53 kubernetes-upgrade-873587 dockerd[13105]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:53 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: Starting Docker Application Container Engine...
	Sep 20 17:35:54 kubernetes-upgrade-873587 dockerd[13120]: invalid TLS configuration: error reading X509 key pair - make sure the key is not encrypted (cert: "/etc/docker/server.pem", key: "/etc/docker/server-key.pem"): tls: private key does not match public key
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: Stopped Docker Application Container Engine.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: docker.service: Start request repeated too quickly.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 20 17:35:54 kubernetes-upgrade-873587 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	E0920 17:35:54.892920   13235 remote_runtime.go:570] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	time="2024-09-20T17:35:54Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 3b ce 59 8d 05 08 06
	[ +19.083497] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff de b7 4c f0 22 55 08 06
	[Sep20 17:34] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa 73 23 a2 53 80 08 06
	[  +0.506965] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa 73 23 a2 53 80 08 06
	[  +0.000260] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 a7 5c 8e 02 6d 08 06
	[  +6.157565] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 e3 99 29 bf 48 08 06
	[ +28.038446] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 8d af 76 dc d6 08 06
	[ +16.796994] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 51 49 75 c2 12 08 06
	[  +0.000358] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 73 23 a2 53 80 08 06
	[Sep20 17:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 be 20 64 2d 2d 08 06
	[  +0.000852] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ca 8d af 76 dc d6 08 06
	[  +3.937624] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 0a 71 a8 75 51 ab 08 06
	[  +0.091583] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 63 ed 7c 84 31 08 06
	
	
	==> kernel <==
	 17:35:55 up  1:18,  0 users,  load average: 6.29, 4.40, 2.73
	Linux kubernetes-upgrade-873587 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	Sep 20 17:35:50 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:50.993585   12425 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
	Sep 20 17:35:50 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:50.993662   12425 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:50 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:50.993688   12425 kubelet_pods.go:1191] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:50 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:50.993707   12425 kubelet.go:2508] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:51 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:51.092084   12425 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Sep 20 17:35:51 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:51.092744   12425 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:51 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:51.092862   12425 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.094400   12425 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.094466   12425 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.094482   12425 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.167761   12425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-873587?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="3.2s"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.992530   12425 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.992612   12425 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.992633   12425 kubelet_pods.go:1191] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:52 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:52.992649   12425 kubelet.go:2508] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:53 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:53.096250   12425 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Sep 20 17:35:53 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:53.096881   12425 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:53 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:53.096996   12425 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:54 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:54.098946   12425 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="nil"
	Sep 20 17:35:54 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:54.099056   12425 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:54 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:54.099071   12425 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:54 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:54.993166   12425 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
	Sep 20 17:35:54 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:54.993240   12425 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:54 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:54.993261   12425 kubelet_pods.go:1191] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 20 17:35:54 kubernetes-upgrade-873587 kubelet[12425]: E0920 17:35:54.993277   12425 kubelet.go:2508] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:35:54.684688  400750 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0920 17:35:54.704310  400750 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0920 17:35:54.721400  400750 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0920 17:35:54.738160  400750 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0920 17:35:54.760217  400750 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0920 17:35:54.782503  400750 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0920 17:35:54.804218  400750 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0920 17:35:54.821462  400750 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-873587 -n kubernetes-upgrade-873587
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-873587 -n kubernetes-upgrade-873587: exit status 2 (326.878335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-873587" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-873587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-873587
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-873587: (2.041010022s)
--- FAIL: TestKubernetesUpgrade (342.86s)
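
The journal excerpts above point at the proximate cause of this failure: after the upgrade, every dockerd start aborts with "tls: private key does not match public key" for the pair /etc/docker/server.pem and /etc/docker/server-key.pem, systemd's start-rate limiter eventually trips ("Start request repeated too quickly"), and the crictl, kubectl, and kubelet errors are all downstream of the dead /var/run/docker.sock. A minimal diagnostic sketch, assuming shell access to the node (for example via "minikube ssh -p kubernetes-upgrade-873587"); the openssl and systemctl invocations are standard, but the recovery steps are an illustration, not the test suite's procedure:

	# Hash the public key embedded in the certificate and the public key
	# derived from the private key; differing digests confirm the mismatch
	# that dockerd logs above.
	sudo openssl x509 -in /etc/docker/server.pem -noout -pubkey | openssl sha256
	sudo openssl pkey -in /etc/docker/server-key.pem -pubout | openssl sha256

	# Once a matching pair is in place (a fresh "minikube start" normally
	# regenerates these certs), clear systemd's rate limiter and retry:
	sudo systemctl reset-failed docker.service
	sudo systemctl start docker.service

	# Confirm the socket the kubelet was failing to reach answers again:
	sudo curl --silent --unix-socket /var/run/docker.sock http://localhost/version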

                                                
                                    

Test pass (320/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.93
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 12.84
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 1.03
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.99
21 TestBinaryMirror 0.77
22 TestOffline 46.92
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 210.32
29 TestAddons/serial/Volcano 40.7
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 20.66
35 TestAddons/parallel/InspektorGadget 10.76
36 TestAddons/parallel/MetricsServer 5.71
38 TestAddons/parallel/CSI 52.88
39 TestAddons/parallel/Headlamp 17.35
40 TestAddons/parallel/CloudSpanner 5.45
41 TestAddons/parallel/LocalPath 54.36
42 TestAddons/parallel/NvidiaDevicePlugin 5.41
43 TestAddons/parallel/Yakd 10.6
44 TestAddons/StoppedEnableDisable 5.85
45 TestCertOptions 25.99
46 TestCertExpiration 231.36
47 TestDockerFlags 36.18
48 TestForceSystemdFlag 37.19
49 TestForceSystemdEnv 26.04
51 TestKVMDriverInstallOrUpdate 4.52
55 TestErrorSpam/setup 21.28
56 TestErrorSpam/start 0.55
57 TestErrorSpam/status 0.85
58 TestErrorSpam/pause 1.13
59 TestErrorSpam/unpause 1.35
60 TestErrorSpam/stop 10.84
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 35.69
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.65
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.43
72 TestFunctional/serial/CacheCmd/cache/add_local 1.45
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.24
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 41.08
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 0.99
83 TestFunctional/serial/LogsFileCmd 1.01
84 TestFunctional/serial/InvalidService 4.23
86 TestFunctional/parallel/ConfigCmd 0.33
87 TestFunctional/parallel/DashboardCmd 14.44
88 TestFunctional/parallel/DryRun 0.41
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 1.09
94 TestFunctional/parallel/ServiceCmdConnect 8.57
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 44.91
98 TestFunctional/parallel/SSHCmd 0.52
99 TestFunctional/parallel/CpCmd 1.83
100 TestFunctional/parallel/MySQL 25.05
101 TestFunctional/parallel/FileSync 0.28
102 TestFunctional/parallel/CertSync 1.86
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
110 TestFunctional/parallel/License 0.68
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.26
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
118 TestFunctional/parallel/ProfileCmd/profile_list 0.36
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
120 TestFunctional/parallel/MountCmd/any-port 7.54
121 TestFunctional/parallel/ServiceCmd/List 0.34
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
124 TestFunctional/parallel/ServiceCmd/Format 0.37
125 TestFunctional/parallel/ServiceCmd/URL 0.49
126 TestFunctional/parallel/MountCmd/specific-port 1.96
127 TestFunctional/parallel/MountCmd/VerifyCleanup 1.16
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/Version/short 0.05
135 TestFunctional/parallel/Version/components 0.76
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
140 TestFunctional/parallel/ImageCommands/ImageBuild 4.52
141 TestFunctional/parallel/ImageCommands/Setup 1.9
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.9
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.74
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
148 TestFunctional/parallel/DockerEnv/bash 1.3
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 101.73
160 TestMultiControlPlane/serial/DeployApp 6.28
161 TestMultiControlPlane/serial/PingHostFromPods 1.04
162 TestMultiControlPlane/serial/AddWorkerNode 20.46
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
165 TestMultiControlPlane/serial/CopyFile 15.72
166 TestMultiControlPlane/serial/StopSecondaryNode 11.36
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
168 TestMultiControlPlane/serial/RestartSecondaryNode 23.45
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.14
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 261.82
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.35
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
173 TestMultiControlPlane/serial/StopCluster 32.44
174 TestMultiControlPlane/serial/RestartCluster 79.26
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
176 TestMultiControlPlane/serial/AddSecondaryNode 31.37
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
180 TestImageBuild/serial/Setup 20.51
181 TestImageBuild/serial/NormalBuild 2.59
182 TestImageBuild/serial/BuildWithBuildArg 0.98
183 TestImageBuild/serial/BuildWithDockerIgnore 0.91
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
188 TestJSONOutput/start/Command 72.08
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.53
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.41
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.81
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
213 TestKicCustomNetwork/create_custom_network 26.81
214 TestKicCustomNetwork/use_default_bridge_network 23.9
215 TestKicExistingNetwork 25.79
216 TestKicCustomSubnet 26.56
217 TestKicStaticIP 25.8
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 50.18
222 TestMountStart/serial/StartWithMountFirst 10.06
223 TestMountStart/serial/VerifyMountFirst 0.24
224 TestMountStart/serial/StartWithMountSecond 10.37
225 TestMountStart/serial/VerifyMountSecond 0.24
226 TestMountStart/serial/DeleteFirst 1.46
227 TestMountStart/serial/VerifyMountPostDelete 0.24
228 TestMountStart/serial/Stop 1.17
229 TestMountStart/serial/RestartStopped 8.67
230 TestMountStart/serial/VerifyMountPostStop 0.24
233 TestMultiNode/serial/FreshStart2Nodes 69.76
234 TestMultiNode/serial/DeployApp2Nodes 39.02
235 TestMultiNode/serial/PingHostFrom2Pods 0.72
236 TestMultiNode/serial/AddNode 18.57
237 TestMultiNode/serial/MultiNodeLabels 0.07
238 TestMultiNode/serial/ProfileList 0.66
239 TestMultiNode/serial/CopyFile 8.96
240 TestMultiNode/serial/StopNode 2.09
241 TestMultiNode/serial/StartAfterStop 9.87
242 TestMultiNode/serial/RestartKeepsNodes 100.32
243 TestMultiNode/serial/DeleteNode 5.22
244 TestMultiNode/serial/StopMultiNode 21.43
245 TestMultiNode/serial/RestartMultiNode 55.68
246 TestMultiNode/serial/ValidateNameConflict 26.5
251 TestPreload 148.06
253 TestScheduledStopUnix 94.31
254 TestSkaffold 107.75
256 TestInsufficientStorage 12.64
257 TestRunningBinaryUpgrade 80.95
260 TestMissingContainerUpgrade 153.36
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestNoKubernetes/serial/StartWithK8s 34.57
264 TestNoKubernetes/serial/StartWithStopK8s 17.68
276 TestStoppedBinaryUpgrade/Setup 2.42
277 TestStoppedBinaryUpgrade/Upgrade 144.3
278 TestNoKubernetes/serial/Start 7.62
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
280 TestNoKubernetes/serial/ProfileList 6.27
281 TestNoKubernetes/serial/Stop 1.2
282 TestNoKubernetes/serial/StartNoArgs 8.17
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
293 TestPause/serial/Start 70.08
294 TestNetworkPlugins/group/auto/Start 68.68
295 TestPause/serial/SecondStartNoReconfiguration 33.07
296 TestNetworkPlugins/group/custom-flannel/Start 33.18
297 TestPause/serial/Pause 0.52
298 TestPause/serial/VerifyStatus 0.29
299 TestPause/serial/Unpause 0.48
300 TestPause/serial/PauseAgain 0.59
301 TestPause/serial/DeletePaused 2.23
302 TestPause/serial/VerifyDeletedResources 14.63
303 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
304 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
305 TestNetworkPlugins/group/false/Start 68.85
306 TestNetworkPlugins/group/custom-flannel/DNS 24.76
307 TestNetworkPlugins/group/auto/KubeletFlags 0.45
308 TestNetworkPlugins/group/auto/NetCatPod 9.48
309 TestNetworkPlugins/group/auto/DNS 0.16
310 TestNetworkPlugins/group/auto/Localhost 0.14
311 TestNetworkPlugins/group/auto/HairPin 0.13
312 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
313 TestNetworkPlugins/group/custom-flannel/HairPin 6.58
314 TestNetworkPlugins/group/kindnet/Start 34.54
315 TestNetworkPlugins/group/calico/Start 62.06
316 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
317 TestNetworkPlugins/group/false/KubeletFlags 0.3
318 TestNetworkPlugins/group/false/NetCatPod 10.24
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
320 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
321 TestNetworkPlugins/group/flannel/Start 50.27
322 TestNetworkPlugins/group/false/DNS 0.16
323 TestNetworkPlugins/group/false/Localhost 0.17
324 TestNetworkPlugins/group/false/HairPin 0.14
325 TestNetworkPlugins/group/kindnet/DNS 26.93
326 TestNetworkPlugins/group/enable-default-cni/Start 36.24
327 TestNetworkPlugins/group/kindnet/Localhost 0.12
328 TestNetworkPlugins/group/kindnet/HairPin 0.13
329 TestNetworkPlugins/group/calico/ControllerPod 6.01
330 TestNetworkPlugins/group/calico/KubeletFlags 0.4
331 TestNetworkPlugins/group/calico/NetCatPod 10.24
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/calico/DNS 0.13
334 TestNetworkPlugins/group/calico/Localhost 0.12
335 TestNetworkPlugins/group/calico/HairPin 0.12
336 TestNetworkPlugins/group/bridge/Start 66.99
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
338 TestNetworkPlugins/group/flannel/NetCatPod 11.18
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.31
341 TestNetworkPlugins/group/flannel/DNS 0.16
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
343 TestNetworkPlugins/group/flannel/Localhost 0.14
344 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
345 TestNetworkPlugins/group/flannel/HairPin 0.16
346 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
347 TestNetworkPlugins/group/kubenet/Start 44.7
349 TestStartStop/group/old-k8s-version/serial/FirstStart 132.63
351 TestStartStop/group/embed-certs/serial/FirstStart 71.76
352 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
353 TestNetworkPlugins/group/kubenet/NetCatPod 10.2
354 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
355 TestNetworkPlugins/group/bridge/NetCatPod 11.23
356 TestNetworkPlugins/group/kubenet/DNS 0.13
357 TestNetworkPlugins/group/kubenet/Localhost 0.11
358 TestNetworkPlugins/group/kubenet/HairPin 0.11
359 TestNetworkPlugins/group/bridge/DNS 0.13
360 TestNetworkPlugins/group/bridge/Localhost 0.11
361 TestNetworkPlugins/group/bridge/HairPin 0.1
363 TestStartStop/group/no-preload/serial/FirstStart 43.59
365 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.55
366 TestStartStop/group/embed-certs/serial/DeployApp 9.47
367 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
368 TestStartStop/group/embed-certs/serial/Stop 10.95
369 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
370 TestStartStop/group/embed-certs/serial/SecondStart 262.96
371 TestStartStop/group/no-preload/serial/DeployApp 9.26
372 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
373 TestStartStop/group/no-preload/serial/Stop 10.69
374 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
375 TestStartStop/group/no-preload/serial/SecondStart 262.55
376 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
377 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
379 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.27
380 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.82
381 TestStartStop/group/old-k8s-version/serial/Stop 10.77
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
383 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.18
384 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
385 TestStartStop/group/old-k8s-version/serial/SecondStart 137.7
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
389 TestStartStop/group/old-k8s-version/serial/Pause 2.31
391 TestStartStop/group/newest-cni/serial/FirstStart 29.33
392 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
394 TestStartStop/group/newest-cni/serial/Stop 10.79
395 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
396 TestStartStop/group/newest-cni/serial/SecondStart 15.17
397 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
398 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
399 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
401 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
402 TestStartStop/group/newest-cni/serial/Pause 2.39
403 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
404 TestStartStop/group/embed-certs/serial/Pause 2.35
405 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
408 TestStartStop/group/no-preload/serial/Pause 2.34
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.32
TestDownloadOnly/v1.20.0/json-events (22.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-830893 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-830893 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (22.926408934s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.93s)
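
Note on the -o=json flag exercised above: it switches minikube's stdout to one CloudEvents-style JSON object per line, which is what this test's "json-events" assertions consume. A small sketch of reading that stream by hand (jq assumed available; the event type and field names follow minikube's documented JSON output, so double-check them against your minikube version):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-830893 \
	  --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'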

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 16:43:56.787718   15398 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 16:43:56.787823   15398 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-830893
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-830893: exit status 85 (58.303373ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-830893 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |          |
	|         | -p download-only-830893        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:33
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:33.897872   15410 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:33.897962   15410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:33.897967   15410 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:33.897971   15410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:33.898121   15410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	W0920 16:43:33.898228   15410 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-8616/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-8616/.minikube/config/config.json: no such file or directory
	I0920 16:43:33.898786   15410 out.go:352] Setting JSON to true
	I0920 16:43:33.899680   15410 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1558,"bootTime":1726849056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:43:33.899782   15410 start.go:139] virtualization: kvm guest
	I0920 16:43:33.902156   15410 out.go:97] [download-only-830893] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 16:43:33.902265   15410 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 16:43:33.902315   15410 notify.go:220] Checking for updates...
	I0920 16:43:33.903677   15410 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:43:33.905101   15410 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:33.906621   15410 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 16:43:33.908083   15410 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	I0920 16:43:33.909449   15410 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 16:43:33.911852   15410 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 16:43:33.912060   15410 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:43:33.933270   15410 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 16:43:33.933335   15410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:34.295520   15410 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 16:43:34.286487957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 16:43:34.295624   15410 docker.go:318] overlay module found
	I0920 16:43:34.297661   15410 out.go:97] Using the docker driver based on user configuration
	I0920 16:43:34.297691   15410 start.go:297] selected driver: docker
	I0920 16:43:34.297697   15410 start.go:901] validating driver "docker" against <nil>
	I0920 16:43:34.297771   15410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:34.346629   15410 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 16:43:34.338379702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 16:43:34.346839   15410 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:43:34.347406   15410 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 16:43:34.347572   15410 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 16:43:34.349487   15410 out.go:169] Using Docker driver with root privileges
	I0920 16:43:34.350892   15410 cni.go:84] Creating CNI manager for ""
	I0920 16:43:34.350966   15410 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 16:43:34.351058   15410 start.go:340] cluster config:
	{Name:download-only-830893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-830893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:43:34.352582   15410 out.go:97] Starting "download-only-830893" primary control-plane node in "download-only-830893" cluster
	I0920 16:43:34.352608   15410 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 16:43:34.353932   15410 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 16:43:34.353983   15410 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 16:43:34.354125   15410 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 16:43:34.369725   15410 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 16:43:34.369907   15410 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 16:43:34.370009   15410 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 16:43:34.522478   15410 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 16:43:34.522523   15410 cache.go:56] Caching tarball of preloaded images
	I0920 16:43:34.522693   15410 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 16:43:34.524890   15410 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 16:43:34.524916   15410 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 16:43:34.629114   15410 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 16:43:45.279093   15410 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 16:43:45.279192   15410 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 16:43:46.067910   15410 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 16:43:46.068245   15410 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/download-only-830893/config.json ...
	I0920 16:43:46.068272   15410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/download-only-830893/config.json: {Name:mk2022f20641d132b63073774f81f3f2a23faf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:43:46.068447   15410 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 16:43:46.068600   15410 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19672-8616/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-830893 host does not exist
	  To start a cluster, run: "minikube start -p download-only-830893"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
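
The "Last Start" log above shows the preload mechanics this test group depends on: minikube downloads a tarball of pre-pulled images, with the expected md5 carried in the URL's checksum query parameter, and verifies it before caching. A hedged sketch of reproducing that verification by hand (curl and md5sum assumed available; the URL and digest are copied from the log lines above):

	curl -fsSLo preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 \
	  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
	md5sum preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	# expected digest, from the ?checksum=md5:... parameter in the log:
	# 9a82241e9b8b4ad2b5cca73108f2c7a3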

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-830893
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (12.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-192555 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-192555 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.837446794s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.84s)
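The json-events subtest drives `minikube start --download-only` with `-o=json`, which streams progress as one JSON event per line on stdout. A minimal sketch of the same invocation for checking the event stream by hand (profile name and flags taken from this run; the jq filter is illustrative and assumes each event carries a `type` field, as in minikube's CloudEvents-style output):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-192555 \
	    --kubernetes-version=v1.31.1 --driver=docker --container-runtime=docker | jq -r '.type'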

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 16:44:10.001014   15398 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 16:44:10.001064   15398 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
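preload-exists only asserts that the tarball fetched during json-events landed in the profile cache. The equivalent check by hand (paths assume the default ~/.minikube home; this run used the Jenkins workspace instead, and the expected md5 is the checksum embedded in the download URL logged above):

	ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	# expect 42e9a173dd5f0c45ed1a890dd06aec5a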

TestDownloadOnly/v1.31.1/LogsDuration (1.03s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-192555
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-192555: exit status 85 (1.027721688s)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-830893 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-830893        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p download-only-830893        | download-only-830893 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | -o=json --download-only        | download-only-192555 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-192555        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:57
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:57.199235   15818 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:57.199342   15818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:57.199351   15818 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:57.199356   15818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:57.199535   15818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 16:43:57.200071   15818 out.go:352] Setting JSON to true
	I0920 16:43:57.200927   15818 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1581,"bootTime":1726849056,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:43:57.201029   15818 start.go:139] virtualization: kvm guest
	I0920 16:43:57.203231   15818 out.go:97] [download-only-192555] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 16:43:57.203427   15818 notify.go:220] Checking for updates...
	I0920 16:43:57.204772   15818 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:43:57.206166   15818 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:57.207452   15818 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 16:43:57.209083   15818 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	I0920 16:43:57.210679   15818 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 16:43:57.213431   15818 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 16:43:57.213715   15818 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:43:57.235704   15818 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 16:43:57.235796   15818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:57.284512   15818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 16:43:57.275776343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 16:43:57.284610   15818 docker.go:318] overlay module found
	I0920 16:43:57.286418   15818 out.go:97] Using the docker driver based on user configuration
	I0920 16:43:57.286438   15818 start.go:297] selected driver: docker
	I0920 16:43:57.286443   15818 start.go:901] validating driver "docker" against <nil>
	I0920 16:43:57.286514   15818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:57.329318   15818 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 16:43:57.321078356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 16:43:57.329470   15818 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:43:57.329962   15818 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 16:43:57.330084   15818 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 16:43:57.332174   15818 out.go:169] Using Docker driver with root privileges
	I0920 16:43:57.333644   15818 cni.go:84] Creating CNI manager for ""
	I0920 16:43:57.333717   15818 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:43:57.333730   15818 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:43:57.333806   15818 start.go:340] cluster config:
	{Name:download-only-192555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-192555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:43:57.335232   15818 out.go:97] Starting "download-only-192555" primary control-plane node in "download-only-192555" cluster
	I0920 16:43:57.335255   15818 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 16:43:57.336479   15818 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 16:43:57.336504   15818 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:43:57.336638   15818 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 16:43:57.352058   15818 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 16:43:57.352187   15818 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 16:43:57.352204   15818 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 16:43:57.352210   15818 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 16:43:57.352222   15818 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 16:43:57.818253   15818 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 16:43:57.818284   15818 cache.go:56] Caching tarball of preloaded images
	I0920 16:43:57.818472   15818 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:43:57.820430   15818 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 16:43:57.820451   15818 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0920 16:43:57.928714   15818 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19672-8616/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-192555 host does not exist
	  To start a cluster, run: "minikube start -p download-only-192555"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (1.03s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-192555
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.99s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-226389 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-226389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-226389
--- PASS: TestDownloadOnlyKic (0.99s)

TestBinaryMirror (0.77s)

=== RUN   TestBinaryMirror
I0920 16:44:12.624457   15398 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-950195 --alsologtostderr --binary-mirror http://127.0.0.1:35633 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-950195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-950195
--- PASS: TestBinaryMirror (0.77s)
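TestBinaryMirror points --binary-mirror at a throwaway local HTTP server and verifies kubectl is downloaded from it rather than dl.k8s.io. A rough sketch of the same setup (the profile name is hypothetical, the port arbitrary, and the mirror directory layout copying dl.k8s.io's release/<version>/bin/linux/amd64 structure is an assumption):

	mkdir -p mirror/release/v1.31.1/bin/linux/amd64
	(cd mirror && python3 -m http.server 35633 &)
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	    --binary-mirror http://127.0.0.1:35633 --driver=docker --container-runtime=docker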

TestOffline (46.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-451121 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-451121 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (44.134574253s)
helpers_test.go:175: Cleaning up "offline-docker-451121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-451121
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-451121: (2.782593126s)
--- PASS: TestOffline (46.92s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-205029
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-205029: exit status 85 (50.877459ms)

-- stdout --
	* Profile "addons-205029" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-205029"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-205029
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-205029: exit status 85 (48.347604ms)

-- stdout --
	* Profile "addons-205029" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-205029"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (210.32s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-205029 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-205029 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m30.319003071s)
--- PASS: TestAddons/Setup (210.32s)
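Setup enables thirteen addons in a single start; every later TestAddons subtest reuses this addons-205029 profile. A trimmed-down sketch for reproducing a few of them by hand (hypothetical profile name; flag values taken from this run):

	out/minikube-linux-amd64 start -p addons-demo --memory=4000 --driver=docker --container-runtime=docker \
	    --addons=registry --addons=metrics-server --addons=ingress
	out/minikube-linux-amd64 -p addons-demo addons list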

TestAddons/serial/Volcano (40.7s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 10.156096ms
addons_test.go:835: volcano-scheduler stabilized in 10.418437ms
addons_test.go:843: volcano-admission stabilized in 10.503578ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-5pdbq" [11c956fb-c5ec-4da8-aed7-91192afed612] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003252147s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-fwb42" [bf7d2cf6-66b7-467e-b67a-ac8a6f65a855] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003303616s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-wdrt2" [c2aecece-239a-4e6e-b0bd-b76d426344fd] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003435764s
addons_test.go:870: (dbg) Run:  kubectl --context addons-205029 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-205029 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-205029 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ab422656-c6ab-4d27-98b5-f3b42e59e5a1] Pending
helpers_test.go:344: "test-job-nginx-0" [ab422656-c6ab-4d27-98b5-f3b42e59e5a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ab422656-c6ab-4d27-98b5-f3b42e59e5a1] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.002990743s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-205029 addons disable volcano --alsologtostderr -v=1: (10.360531728s)
--- PASS: TestAddons/serial/Volcano (40.70s)
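The Volcano check waits for the scheduler, admission, and controller deployments to stabilize, then submits the sample job and waits for its pod. The same flow by hand against this profile (the manifest path is relative to the integration-test tree):

	kubectl --context addons-205029 get pods -n volcano-system
	kubectl --context addons-205029 create -f testdata/vcjob.yaml
	kubectl --context addons-205029 get vcjob -n my-volcano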

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-205029 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-205029 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
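This subtest exercises the gcp-auth addon's webhook, which is expected to replicate the gcp-auth credentials secret into namespaces created after the addon is enabled. The same two-step check by hand:

	kubectl --context addons-205029 create ns new-namespace
	kubectl --context addons-205029 get secret gcp-auth -n new-namespace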

TestAddons/parallel/Ingress (20.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-205029 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-205029 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-205029 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a95b6985-5a3b-401c-b849-2ccafacb3bec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a95b6985-5a3b-401c-b849-2ccafacb3bec] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003653897s
I0920 16:57:23.064256   15398 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-205029 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-205029 addons disable ingress --alsologtostderr -v=1: (7.557157985s)
--- PASS: TestAddons/parallel/Ingress (20.66s)
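The ingress check curls the controller from inside the node with a Host header matching the Ingress rule, then resolves the ingress-dns sample hostname against the node IP. Equivalent manual probes (hostnames come from the test manifests):

	out/minikube-linux-amd64 -p addons-205029 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-205029 ip)"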

TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t9gbj" [ea61c8a0-292f-47e7-8b7f-eb8e4aa9d01f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003897678s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-205029
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-205029: (5.753598776s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

TestAddons/parallel/MetricsServer (5.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
I0920 16:56:26.910663   15398 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:405: metrics-server stabilized in 2.875034ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-44j97" [d4cc25a2-9517-4e7b-9fa5-57b6a061d910] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002534689s
addons_test.go:413: (dbg) Run:  kubectl --context addons-205029 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)
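Once the metrics-server pod reports Ready, resource metrics should be queryable; the test's final assertion is simply that this command succeeds:

	kubectl --context addons-205029 top pods -n kube-system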

TestAddons/parallel/CSI (52.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.075509ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [414197fd-1923-42a4-98a4-ca17d466a863] Pending
helpers_test.go:344: "task-pv-pod" [414197fd-1923-42a4-98a4-ca17d466a863] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [414197fd-1923-42a4-98a4-ca17d466a863] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.002727059s
addons_test.go:528: (dbg) Run:  kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-205029 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-205029 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-205029 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-205029 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [25812b64-b671-4f7a-ac1e-355287db2861] Pending
helpers_test.go:344: "task-pv-pod-restore" [25812b64-b671-4f7a-ac1e-355287db2861] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [25812b64-b671-4f7a-ac1e-355287db2861] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002677823s
addons_test.go:570: (dbg) Run:  kubectl --context addons-205029 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-205029 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-205029 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-205029 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.449429102s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.88s)
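The CSI flow above is: provision a PVC against the hostpath driver, mount it in a pod, snapshot it, then restore the snapshot into a new PVC and pod. Condensed to its kubectl skeleton (manifest paths relative to the integration-test tree; in practice each step waits for the previous object to reach Bound/Running/readyToUse, as the polling above shows):

	kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-205029 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-205029 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml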

TestAddons/parallel/Headlamp (17.35s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-205029 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-nk79k" [0b22e90b-78b2-4afd-8e8e-5be93885b563] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-nk79k" [0b22e90b-78b2-4afd-8e8e-5be93885b563] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003910015s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-205029 addons disable headlamp --alsologtostderr -v=1: (5.652316467s)
--- PASS: TestAddons/parallel/Headlamp (17.35s)

TestAddons/parallel/CloudSpanner (5.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-99xdt" [f4d9d9ba-aeae-4615-8d7b-aaa694543164] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003194416s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-205029
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

TestAddons/parallel/LocalPath (54.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-205029 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-205029 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205029 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [765c24d1-9716-4c95-9779-0cd9ab191c74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [765c24d1-9716-4c95-9779-0cd9ab191c74] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [765c24d1-9716-4c95-9779-0cd9ab191c74] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003786594s
addons_test.go:938: (dbg) Run:  kubectl --context addons-205029 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 ssh "cat /opt/local-path-provisioner/pvc-d6bd4afe-8bba-4f86-86d7-a230517a8194_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-205029 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-205029 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-205029 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.424030034s)
--- PASS: TestAddons/parallel/LocalPath (54.36s)

TestAddons/parallel/NvidiaDevicePlugin (5.41s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xpzd9" [caf9d40a-dff4-4e28-b6c7-d185e6e30b5a] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004361626s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-205029
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.41s)

TestAddons/parallel/Yakd (10.6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rf8nn" [74b1a2e8-8ac3-4d88-9b00-68c6479f3647] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00431204s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-205029 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-205029 addons disable yakd --alsologtostderr -v=1: (5.59842822s)
--- PASS: TestAddons/parallel/Yakd (10.60s)

TestAddons/StoppedEnableDisable (5.85s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-205029
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-205029: (5.620680865s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-205029
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-205029
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-205029
--- PASS: TestAddons/StoppedEnableDisable (5.85s)

TestCertOptions (25.99s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-459884 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0920 17:33:30.605837   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:30.612316   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:30.627681   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:30.649103   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:30.690960   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:30.772927   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:30.934886   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:31.256557   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-459884 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (23.217734244s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-459884 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-459884 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-459884 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-459884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-459884
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-459884: (2.181548179s)
--- PASS: TestCertOptions (25.99s)
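(The skaffold-342972 cert_rotation errors above appear to come from a stale kubeconfig entry left behind by another test's deleted profile; this run still passed.) The SAN and port assertions reduce to inspecting the generated apiserver certificate inside the node. By hand (the grep is an illustrative filter, not part of the test):

	out/minikube-linux-amd64 -p cert-options-459884 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'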

TestCertExpiration (231.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-987336 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-987336 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (24.675749072s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-987336 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-987336 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.491145987s)
helpers_test.go:175: Cleaning up "cert-expiration-987336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-987336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-987336: (2.187172576s)
--- PASS: TestCertExpiration (231.36s)
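The expiration test issues certificates with a 3-minute lifetime, waits out the window (hence the ~231s runtime), then restarts with a longer lifetime to force re-issue. The two invocations, for hand-testing under a hypothetical profile name:

	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
	# wait for the 3m window to lapse, then renew with a one-year lifetime
	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker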

TestDockerFlags (36.18s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-552738 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-552738 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.324976981s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-552738 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-552738 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-552738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-552738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-552738: (2.102826459s)
--- PASS: TestDockerFlags (36.18s)
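
Note: the two systemctl queries verify both halves of the flag plumbing: --docker-env values should surface in the unit's Environment property, and --docker-opt values in its ExecStart line. A sketch, assuming the profile exists:

    # Environment should include FOO=BAR and BAZ=BAT; ExecStart should carry the debug and icc=true options.
    out/minikube-linux-amd64 -p docker-flags-552738 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-linux-amd64 -p docker-flags-552738 ssh "sudo systemctl show docker --property=ExecStart --no-pager"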

TestForceSystemdFlag (37.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-656252 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-656252 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.706635967s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-656252 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-656252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-656252
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-656252: (2.162750731s)
--- PASS: TestForceSystemdFlag (37.19s)
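
Note: a quick way to confirm what --force-systemd changed, assuming the profile exists (the default otherwise is typically cgroupfs, as in the host docker info dumps elsewhere in this report):

    # Expected to print: systemd
    out/minikube-linux-amd64 -p force-systemd-flag-656252 ssh "docker info --format {{.CgroupDriver}}"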

TestForceSystemdEnv (26.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-567519 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-567519 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (23.58921874s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-567519 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-567519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-567519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-567519: (2.152211077s)
--- PASS: TestForceSystemdEnv (26.04s)

TestKVMDriverInstallOrUpdate (4.52s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0920 17:29:34.167245   15398 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 17:29:34.167417   15398 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 17:29:34.211558   15398 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 17:29:34.211912   15398 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 17:29:34.211961   15398 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1414948815/001/docker-machine-driver-kvm2
I0920 17:29:34.451163   15398 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1414948815/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000813b30 gz:0xc000813b38 tar:0xc000813ae0 tar.bz2:0xc000813af0 tar.gz:0xc000813b00 tar.xz:0xc000813b10 tar.zst:0xc000813b20 tbz2:0xc000813af0 tgz:0xc000813b00 txz:0xc000813b10 tzst:0xc000813b20 xz:0xc000813b40 zip:0xc000813b50 zst:0xc000813b48] Getters:map[file:0xc001cd0f60 http:0xc00079a460 https:0xc00079a4b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 17:29:34.451221   15398 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1414948815/001/docker-machine-driver-kvm2
I0920 17:29:36.709211   15398 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 17:29:36.709293   15398 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 17:29:36.741261   15398 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 17:29:36.741292   15398 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 17:29:36.741397   15398 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 17:29:36.741445   15398 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1414948815/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.52s)
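
Note: the log above shows the intended fallback: the arch-suffixed download fails its checksum fetch with a 404, and the unsuffixed "common" URL is tried next. A rough shell equivalent of that fallback, using the URLs from the log (minikube additionally verifies the .sha256 checksum, which this sketch skips):

    curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64 \
      || curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2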

TestErrorSpam/setup (21.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-580417 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-580417 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-580417 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-580417 --driver=docker  --container-runtime=docker: (21.278729309s)
--- PASS: TestErrorSpam/setup (21.28s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 pause
--- PASS: TestErrorSpam/pause (1.13s)

TestErrorSpam/unpause (1.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

TestErrorSpam/stop (10.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 stop: (10.670608384s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580417 --log_dir /tmp/nospam-580417 stop
--- PASS: TestErrorSpam/stop (10.84s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-8616/.minikube/files/etc/test/nested/copy/15398/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (35.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796375 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-796375 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (35.690728012s)
--- PASS: TestFunctional/serial/StartWithProxy (35.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.65s)

=== RUN   TestFunctional/serial/SoftStart
I0920 16:59:01.215797   15398 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796375 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-796375 --alsologtostderr -v=8: (33.646554344s)
functional_test.go:663: soft start took 33.647298846s for "functional-796375" cluster.
I0920 16:59:34.862717   15398 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.65s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-796375 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-796375 /tmp/TestFunctionalserialCacheCmdcacheadd_local453010969/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cache add minikube-local-cache-test:functional-796375
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-796375 cache add minikube-local-cache-test:functional-796375: (1.116305766s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cache delete minikube-local-cache-test:functional-796375
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-796375
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)
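
Note: condensed, the add_local flow builds a throwaway image on the host and pushes it into the cluster-side cache. The build-context path below is a hypothetical stand-in for the per-test tmp directory above:

    docker build -t minikube-local-cache-test:functional-796375 ./build-context    # hypothetical context dir
    out/minikube-linux-amd64 -p functional-796375 cache add minikube-local-cache-test:functional-796375
    out/minikube-linux-amd64 -p functional-796375 cache delete minikube-local-cache-test:functional-796375
    docker rmi minikube-local-cache-test:functional-796375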

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (261.024034ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.24s)
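
Note: read as a unit, the sequence above is: remove the image inside the node, confirm it is gone (the expected exit status 1 from crictl inspecti), then let cache reload restore it from the host-side cache. Condensed:

    out/minikube-linux-amd64 -p functional-796375 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-796375 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exits 1: image gone
    out/minikube-linux-amd64 -p functional-796375 cache reload
    out/minikube-linux-amd64 -p functional-796375 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again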

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 kubectl -- --context functional-796375 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-796375 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796375 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-796375 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.077306314s)
functional_test.go:761: restart took 41.077414951s for "functional-796375" cluster.
I0920 17:00:21.846841   15398 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (41.08s)
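
Note: --extra-config takes component.key=value and is persisted in the profile (it reappears later in this report as ExtraOptions in the DryRun config dump). The restart exercised above, in isolation:

    out/minikube-linux-amd64 start -p functional-796375 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all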

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-796375 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
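
Note: the test derives the phase/status lines above from get po -o=json. Roughly the same view with a jsonpath template (my formulation, not the test's):

    kubectl --context functional-796375 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'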

TestFunctional/serial/LogsCmd (0.99s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 logs
--- PASS: TestFunctional/serial/LogsCmd (0.99s)

TestFunctional/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 logs --file /tmp/TestFunctionalserialLogsFileCmd1692066078/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-796375 logs --file /tmp/TestFunctionalserialLogsFileCmd1692066078/001/logs.txt: (1.006080693s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-796375 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-796375
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-796375: exit status 115 (320.755449ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31060 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-796375 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 config get cpus: exit status 14 (71.106333ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 config get cpus: exit status 14 (45.962964ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
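
Note: exit status 14 is the expected outcome of config get on an unset key, which is what the test asserts twice. The full cycle:

    out/minikube-linux-amd64 -p functional-796375 config get cpus      # exit status 14 while unset
    out/minikube-linux-amd64 -p functional-796375 config set cpus 2
    out/minikube-linux-amd64 -p functional-796375 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-796375 config unset cpus    # a following get exits 14 again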

TestFunctional/parallel/DashboardCmd (14.44s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-796375 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-796375 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 68558: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.44s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-796375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (182.754644ms)
-- stdout --
	* [functional-796375] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0920 17:00:42.208247   67722 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:00:42.208646   67722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:00:42.208721   67722 out.go:358] Setting ErrFile to fd 2...
	I0920 17:00:42.208735   67722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:00:42.209101   67722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:00:42.210039   67722 out.go:352] Setting JSON to false
	I0920 17:00:42.211780   67722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2586,"bootTime":1726849056,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:00:42.211892   67722 start.go:139] virtualization: kvm guest
	I0920 17:00:42.214521   67722 out.go:177] * [functional-796375] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:00:42.216385   67722 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:00:42.216450   67722 notify.go:220] Checking for updates...
	I0920 17:00:42.219240   67722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:00:42.220742   67722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 17:00:42.222170   67722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	I0920 17:00:42.223597   67722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:00:42.224971   67722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:00:42.226730   67722 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:00:42.227203   67722 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:00:42.250105   67722 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:00:42.250298   67722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:00:42.320129   67722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 17:00:42.308661379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:00:42.320250   67722 docker.go:318] overlay module found
	I0920 17:00:42.322453   67722 out.go:177] * Using the docker driver based on existing profile
	I0920 17:00:42.323989   67722 start.go:297] selected driver: docker
	I0920 17:00:42.324010   67722 start.go:901] validating driver "docker" against &{Name:functional-796375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-796375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:00:42.324145   67722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:00:42.326789   67722 out.go:201] 
	W0920 17:00:42.328315   67722 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 17:00:42.329770   67722 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796375 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.41s)
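
Note: --dry-run runs the full validation path without creating anything; the 250MB request trips the 1800MB usable minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, unconstrained dry run succeeds. To reproduce the failing case:

    out/minikube-linux-amd64 start -p functional-796375 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=docker
    echo $?    # 23, per the RSRC_INSUFFICIENT_REQ_MEMORY error above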

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-796375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-796375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (165.986955ms)
-- stdout --
	* [functional-796375] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0920 17:00:42.617918   68046 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:00:42.618123   68046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:00:42.618136   68046 out.go:358] Setting ErrFile to fd 2...
	I0920 17:00:42.618148   68046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:00:42.618631   68046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:00:42.619507   68046 out.go:352] Setting JSON to false
	I0920 17:00:42.620955   68046 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2587,"bootTime":1726849056,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:00:42.621094   68046 start.go:139] virtualization: kvm guest
	I0920 17:00:42.623774   68046 out.go:177] * [functional-796375] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 17:00:42.625567   68046 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:00:42.625621   68046 notify.go:220] Checking for updates...
	I0920 17:00:42.628686   68046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:00:42.630813   68046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	I0920 17:00:42.632792   68046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	I0920 17:00:42.634384   68046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:00:42.635957   68046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:00:42.637822   68046 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:00:42.638384   68046 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:00:42.664441   68046 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:00:42.664527   68046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:00:42.715473   68046 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 17:00:42.704914261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:00:42.715624   68046 docker.go:318] overlay module found
	I0920 17:00:42.718418   68046 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 17:00:42.719774   68046 start.go:297] selected driver: docker
	I0920 17:00:42.719791   68046 start.go:901] validating driver "docker" against &{Name:functional-796375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-796375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:00:42.719927   68046 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:00:42.722238   68046 out.go:201] 
	W0920 17:00:42.723605   68046 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 17:00:42.725087   68046 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
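
Note: the French output is locale-driven. A sketch of forcing it by hand, assuming minikube picks the translation up from the standard locale environment variables (the test's exact mechanism is not visible in this log):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-796375 --dry-run --memory 250MB \
      --driver=docker --container-runtime=docker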

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
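
Note: status -f takes a Go template over the status fields; the "kublet" in the test's template above is just a typo in the output label, not a field name. For example:

    out/minikube-linux-amd64 -p functional-796375 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'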

TestFunctional/parallel/ServiceCmdConnect (8.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-796375 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-796375 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bbfcb" [93fdd423-ffb5-4ea7-b4ef-c08da2a9ed7a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bbfcb" [93fdd423-ffb5-4ea7-b4ef-c08da2a9ed7a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.002963052s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30486
functional_test.go:1675: http://192.168.49.2:30486: success! body:

Hostname: hello-node-connect-67bdd5bbb4-bbfcb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30486
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.57s)
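
Note: the NodePort round trip above, condensed; the curl line is an assumption standing in for the HTTP GET the test performs internally:

    kubectl --context functional-796375 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-796375 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-796375 service hello-node-connect --url)
    curl -s "$URL"    # echoserver reflects the request, as in the body above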

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (44.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [09e8e119-a5fa-42f8-a6e4-59c57c5291c0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.01169834s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-796375 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-796375 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-796375 get pvc myclaim -o=json
I0920 17:00:34.549477   15398 retry.go:31] will retry after 2.51675923s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:4438592b-5222-4e70-b9c5-c8bef504721b ResourceVersion:700 Generation:0 CreationTimestamp:2024-09-20 17:00:34 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-4438592b-5222-4e70-b9c5-c8bef504721b StorageClassName:0xc001844540 VolumeMode:0xc001844550 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-796375 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-796375 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aa2f262f-2573-4528-bd1f-754d579d52d0] Pending
helpers_test.go:344: "sp-pod" [aa2f262f-2573-4528-bd1f-754d579d52d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aa2f262f-2573-4528-bd1f-754d579d52d0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.013525838s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-796375 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-796375 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-796375 delete -f testdata/storage-provisioner/pod.yaml: (1.301633246s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-796375 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [651ba6c2-9d37-4262-8401-c195ab984361] Pending
helpers_test.go:344: "sp-pod" [651ba6c2-9d37-4262-8401-c195ab984361] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [651ba6c2-9d37-4262-8401-c195ab984361] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004112713s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-796375 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.91s)
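
The claim-and-pod flow above can be reproduced by hand. A minimal sketch, assuming a PVC equivalent to testdata/storage-provisioner/pvc.yaml (the name, access mode, and size are taken from the last-applied-configuration recorded in the log; the manifest itself is not shown there):
	kubectl --context functional-796375 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF
	# after the provisioner binds the claim, this should print "Bound"
	kubectl --context functional-796375 get pvc myclaim -o jsonpath='{.status.phase}'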

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh -n functional-796375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cp functional-796375:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3080322350/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh -n functional-796375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh -n functional-796375 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.83s)

                                                
                                    
TestFunctional/parallel/MySQL (25.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-796375 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-kvk6r" [d6af3d30-4feb-41a8-ad94-36f2cc6f59fd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-kvk6r" [d6af3d30-4feb-41a8-ad94-36f2cc6f59fd] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003651298s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-796375 exec mysql-6cdb49bbb-kvk6r -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-796375 exec mysql-6cdb49bbb-kvk6r -- mysql -ppassword -e "show databases;": exit status 1 (116.580991ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:01:11.774727   15398 retry.go:31] will retry after 1.454600204s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-796375 exec mysql-6cdb49bbb-kvk6r -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-796375 exec mysql-6cdb49bbb-kvk6r -- mysql -ppassword -e "show databases;": exit status 1 (106.151627ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:01:13.336265   15398 retry.go:31] will retry after 2.016880742s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-796375 exec mysql-6cdb49bbb-kvk6r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.05s)
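
The two failures above are transient states while the MySQL container initializes (first the root password is not yet set, then the server restarts and the socket is briefly down), and the test simply retries with backoff until the query succeeds. A rough shell sketch of that retry loop (pod name taken from the log):
	# poll the pod until mysqld accepts the root password, up to ~20s
	for i in $(seq 1 10); do
	  kubectl --context functional-796375 exec mysql-6cdb49bbb-kvk6r -- \
	    mysql -ppassword -e "show databases;" && break
	  sleep 2
	done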

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15398/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo cat /etc/test/nested/copy/15398/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15398.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo cat /etc/ssl/certs/15398.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15398.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo cat /usr/share/ca-certificates/15398.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/153982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo cat /etc/ssl/certs/153982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/153982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo cat /usr/share/ca-certificates/153982.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.86s)
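
The paired checks above follow the OpenSSL convention of installing each certificate both under its own name and under its subject-hash name (the .0 entries such as 51391683.0). A sketch of how such a hash name can be verified, assuming openssl is available inside the guest:
	out/minikube-linux-amd64 -p functional-796375 ssh \
	  "sudo openssl x509 -noout -hash -in /etc/ssl/certs/15398.pem"
	# expected to print the hash used for the .0 filename, e.g. 51391683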

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-796375 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
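
The go-template above iterates over the first node's label map; the same data can be pulled with jsonpath, shown here as an alternative sketch:
	kubectl --context functional-796375 get nodes \
	  -o jsonpath='{.items[0].metadata.labels}'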

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 ssh "sudo systemctl is-active crio": exit status 1 (265.897381ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
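
The non-zero exit is the point of this test: systemctl is-active exits 0 only when the unit is active, so on a Docker-runtime cluster the crio unit prints "inactive" and fails, which is exactly what the test asserts. The same probe by hand:
	out/minikube-linux-amd64 -p functional-796375 ssh "sudo systemctl is-active crio" \
	  || echo "crio is not active, as expected with the docker runtime"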

                                                
                                    
TestFunctional/parallel/License (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-796375 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-796375 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-478hv" [98abda4f-2a7c-4d3e-816c-7020aaae64ca] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-478hv" [98abda4f-2a7c-4d3e-816c-7020aaae64ca] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003745893s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)
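
After the expose step, the NodePort assigned to the service can be read back directly; a sketch (service name from the log, and the port matches the 31817 endpoint found by the later HTTPS/URL subtests):
	kubectl --context functional-796375 get svc hello-node \
	  -o jsonpath='{.spec.ports[0].nodePort}'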

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-796375 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-796375 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-796375 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 63596: os: process already finished
helpers_test.go:508: unable to kill pid 63347: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-796375 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-796375 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-796375 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9ec9002e-6362-4fbf-b694-6c144eb85799] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9ec9002e-6362-4fbf-b694-6c144eb85799] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.004414838s
I0920 17:00:41.763470   15398 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)
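
testdata/testsvc.yaml itself is not shown in the log, but given the run=nginx-svc selector and the ingress-IP check that follows, it plausibly defines an nginx pod plus a LoadBalancer Service along these lines (a sketch, not the actual manifest):
	kubectl --context functional-796375 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: nginx-svc
	spec:
	  type: LoadBalancer
	  selector:
	    run: nginx-svc
	  ports:
	  - port: 80
	EOF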

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "316.389066ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "45.595365ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "301.844389ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.532968ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
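
The --light variant returns in ~48ms versus ~300ms for the full listing; as documented for the flag, it skips validating each cluster's live status and reports only the stored profile data. The difference is easy to see by hand:
	time out/minikube-linux-amd64 profile list -o json >/dev/null
	time out/minikube-linux-amd64 profile list -o json --light >/dev/null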

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdany-port2837606478/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726851632176246787" to /tmp/TestFunctionalparallelMountCmdany-port2837606478/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726851632176246787" to /tmp/TestFunctionalparallelMountCmdany-port2837606478/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726851632176246787" to /tmp/TestFunctionalparallelMountCmdany-port2837606478/001/test-1726851632176246787
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.244342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:00:32.440803   15398 retry.go:31] will retry after 437.404883ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 17:00 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 17:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 17:00 test-1726851632176246787
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh cat /mount-9p/test-1726851632176246787
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-796375 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [78aa9367-05fb-4f42-b3d3-5c96e7f6fa70] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [78aa9367-05fb-4f42-b3d3-5c96e7f6fa70] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [78aa9367-05fb-4f42-b3d3-5c96e7f6fa70] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003039689s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-796375 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdany-port2837606478/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.54s)
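
The sequence above can be driven by hand: start the 9p mount in the background, probe it with findmnt (the first probe may race the mount startup, hence the retry in the log), and tear it down. A sketch with an illustrative host directory:
	out/minikube-linux-amd64 mount -p functional-796375 /tmp/hostdir:/mount-9p &
	MOUNT_PID=$!
	out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T /mount-9p | grep 9p"
	kill "$MOUNT_PID"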

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 service list -o json
functional_test.go:1494: Took "298.063254ms" to run "out/minikube-linux-amd64 -p functional-796375 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31817
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31817
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
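
With the endpoint known, the echo server can be hit directly from the host; a sketch using the URL found above:
	curl -s http://192.168.49.2:31817/ | head -n 5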

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdspecific-port1955480031/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.954774ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 17:00:40.102670   15398 retry.go:31] will retry after 482.086635ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdspecific-port1955480031/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 ssh "sudo umount -f /mount-9p": exit status 1 (297.291383ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-796375 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdspecific-port1955480031/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)
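
The exit-32 umount is expected here: stopping the mount daemon already removed /mount-9p, so the forced umount finds nothing mounted and returns non-zero. A sketch of the same idempotent cleanup:
	out/minikube-linux-amd64 -p functional-796375 ssh "sudo umount -f /mount-9p" \
	  || echo "already unmounted"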

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup495297685/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup495297685/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup495297685/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-796375 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup495297685/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup495297685/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-796375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup495297685/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.16s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-796375 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.25.3 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-796375 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796375 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-796375
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-796375
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796375 image ls --format short --alsologtostderr:
I0920 17:00:53.513440   71421 out.go:345] Setting OutFile to fd 1 ...
I0920 17:00:53.513578   71421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:53.513589   71421 out.go:358] Setting ErrFile to fd 2...
I0920 17:00:53.513595   71421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:53.513914   71421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
I0920 17:00:53.514791   71421 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:53.514933   71421 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:53.515388   71421 cli_runner.go:164] Run: docker container inspect functional-796375 --format={{.State.Status}}
I0920 17:00:53.537493   71421 ssh_runner.go:195] Run: systemctl --version
I0920 17:00:53.537556   71421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796375
I0920 17:00:53.556881   71421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/functional-796375/id_rsa Username:docker}
I0920 17:00:53.647409   71421 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796375 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/kicbase/echo-server               | functional-796375 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-796375 | f666f3b785639 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| localhost/my-image                          | functional-796375 | a04470a321b7f | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796375 image ls --format table --alsologtostderr:
I0920 17:00:58.228468   71961 out.go:345] Setting OutFile to fd 1 ...
I0920 17:00:58.228592   71961 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:58.228603   71961 out.go:358] Setting ErrFile to fd 2...
I0920 17:00:58.228610   71961 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:58.228922   71961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
I0920 17:00:58.229782   71961 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:58.229925   71961 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:58.230603   71961 cli_runner.go:164] Run: docker container inspect functional-796375 --format={{.State.Status}}
I0920 17:00:58.254071   71961 ssh_runner.go:195] Run: systemctl --version
I0920 17:00:58.254138   71961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796375
I0920 17:00:58.273796   71961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/functional-796375/id_rsa Username:docker}
I0920 17:00:58.371198   71961 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796375 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-796375"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"a04470a321b7fede9c447b4e299339eacb2f51ed110d97e7be6b7aaff6e14e9a","repoDigests":[],"repoTags":["localhost/my-image:functional-796375"],"size":"1240000"},{"id":"f666f3b785639e21a17f7738e06ac6a92667961b037e20e20061d27d32a7b1fd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-796375"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"175ffd71cce3d90bae959
04b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["r
egistry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],
"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796375 image ls --format json --alsologtostderr:
I0920 17:00:57.956176   71880 out.go:345] Setting OutFile to fd 1 ...
I0920 17:00:57.956311   71880 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:57.956323   71880 out.go:358] Setting ErrFile to fd 2...
I0920 17:00:57.956329   71880 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:57.956623   71880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
I0920 17:00:57.957550   71880 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:57.957790   71880 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:57.958471   71880 cli_runner.go:164] Run: docker container inspect functional-796375 --format={{.State.Status}}
I0920 17:00:57.979448   71880 ssh_runner.go:195] Run: systemctl --version
I0920 17:00:57.979509   71880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796375
I0920 17:00:58.000839   71880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/functional-796375/id_rsa Username:docker}
I0920 17:00:58.124150   71880 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
W0920 17:00:58.166282   71880 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 403267a0-59f2-4bfc-8cf9-9f358a268834
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
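
Of the list formats, JSON is the easiest to post-process; for example, image names and sizes can be extracted with jq (a sketch, assuming jq is installed on the host):
	out/minikube-linux-amd64 -p functional-796375 image ls --format json \
	  | jq -r '.[] | "\(.repoTags[0]) \(.size)"'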

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-796375 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-796375
size: "4940000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: f666f3b785639e21a17f7738e06ac6a92667961b037e20e20061d27d32a7b1fd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-796375
size: "30"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796375 image ls --format yaml --alsologtostderr:
I0920 17:00:53.724959   71520 out.go:345] Setting OutFile to fd 1 ...
I0920 17:00:53.725230   71520 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:53.725240   71520 out.go:358] Setting ErrFile to fd 2...
I0920 17:00:53.725245   71520 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:53.725491   71520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
I0920 17:00:53.726475   71520 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:53.726639   71520 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:53.727169   71520 cli_runner.go:164] Run: docker container inspect functional-796375 --format={{.State.Status}}
I0920 17:00:53.744843   71520 ssh_runner.go:195] Run: systemctl --version
I0920 17:00:53.744892   71520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796375
I0920 17:00:53.763696   71520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/functional-796375/id_rsa Username:docker}
I0920 17:00:53.855183   71520 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-796375 ssh pgrep buildkitd: exit status 1 (236.622567ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image build -t localhost/my-image:functional-796375 testdata/build --alsologtostderr
2024/09/20 17:00:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-796375 image build -t localhost/my-image:functional-796375 testdata/build --alsologtostderr: (4.046403854s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-796375 image build -t localhost/my-image:functional-796375 testdata/build --alsologtostderr:
I0920 17:00:54.166203   71670 out.go:345] Setting OutFile to fd 1 ...
I0920 17:00:54.166343   71670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:54.166354   71670 out.go:358] Setting ErrFile to fd 2...
I0920 17:00:54.166358   71670 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:00:54.166535   71670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
I0920 17:00:54.167237   71670 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:54.167928   71670 config.go:182] Loaded profile config "functional-796375": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:00:54.168369   71670 cli_runner.go:164] Run: docker container inspect functional-796375 --format={{.State.Status}}
I0920 17:00:54.185257   71670 ssh_runner.go:195] Run: systemctl --version
I0920 17:00:54.185302   71670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-796375
I0920 17:00:54.200833   71670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/functional-796375/id_rsa Username:docker}
I0920 17:00:54.295354   71670 build_images.go:161] Building image from path: /tmp/build.493391816.tar
I0920 17:00:54.295421   71670 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 17:00:54.303977   71670 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.493391816.tar
I0920 17:00:54.307061   71670 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.493391816.tar: stat -c "%s %y" /var/lib/minikube/build/build.493391816.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.493391816.tar': No such file or directory
I0920 17:00:54.307095   71670 ssh_runner.go:362] scp /tmp/build.493391816.tar --> /var/lib/minikube/build/build.493391816.tar (3072 bytes)
I0920 17:00:54.329138   71670 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.493391816
I0920 17:00:54.336919   71670 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.493391816 -xf /var/lib/minikube/build/build.493391816.tar
I0920 17:00:54.345045   71670 docker.go:360] Building image: /var/lib/minikube/build/build.493391816
I0920 17:00:54.345104   71670 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-796375 /var/lib/minikube/build/build.493391816
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:a04470a321b7fede9c447b4e299339eacb2f51ed110d97e7be6b7aaff6e14e9a done
#8 naming to localhost/my-image:functional-796375 done
#8 DONE 0.1s
I0920 17:00:58.126794   71670 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-796375 /var/lib/minikube/build/build.493391816: (3.781663312s)
I0920 17:00:58.126857   71670 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.493391816
I0920 17:00:58.147589   71670 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.493391816.tar
I0920 17:00:58.158686   71670 build_images.go:217] Built localhost/my-image:functional-796375 from /tmp/build.493391816.tar
I0920 17:00:58.158723   71670 build_images.go:133] succeeded building to: functional-796375
I0920 17:00:58.158729   71670 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.52s)
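
The BuildKit stages above (#1-#8) imply a three-step build: FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt. The actual contents of testdata/build are not shown beyond those stage names, so the following reconstruction is a sketch only:

	# Hypothetical reconstruction of testdata/build from the stages logged above.
	mkdir -p testdata/build
	printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > testdata/build/Dockerfile
	echo 'test content' > testdata/build/content.txt
	# Build inside the cluster's container runtime, then confirm the image is listed.
	out/minikube-linux-amd64 -p functional-796375 image build -t localhost/my-image:functional-796375 testdata/build
	out/minikube-linux-amd64 -p functional-796375 image ls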

TestFunctional/parallel/ImageCommands/Setup (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.877030909s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-796375
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.90s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image load --daemon kicbase/echo-server:functional-796375 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.90s)
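
Setup and ImageLoadDaemon together cover the host-to-cluster image path: pull on the host, tag with a profile-scoped name, copy into the node's container runtime, and verify. A condensed sketch of that flow, reusing the names from the log:

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-796375
	# --daemon loads from the host Docker daemon rather than from a tar file.
	out/minikube-linux-amd64 -p functional-796375 image load --daemon kicbase/echo-server:functional-796375
	out/minikube-linux-amd64 -p functional-796375 image ls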

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image load --daemon kicbase/echo-server:functional-796375 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-796375
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image load --daemon kicbase/echo-server:functional-796375 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image save kicbase/echo-server:functional-796375 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image rm kicbase/echo-server:functional-796375 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)
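
ImageSaveToFile, ImageRemove, and ImageLoadFromFile form a tar-based round trip through the cluster. A sketch of the same sequence, with a /tmp path substituted for the Jenkins workspace path the log uses:

	out/minikube-linux-amd64 -p functional-796375 image save kicbase/echo-server:functional-796375 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-796375 image rm kicbase/echo-server:functional-796375
	out/minikube-linux-amd64 -p functional-796375 image load /tmp/echo-server-save.tar
	# The image should appear in the listing again after the load.
	out/minikube-linux-amd64 -p functional-796375 image ls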

TestFunctional/parallel/DockerEnv/bash (1.3s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-796375 docker-env) && out/minikube-linux-amd64 status -p functional-796375"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-796375 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.30s)
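
DockerEnv/bash checks that `minikube docker-env` emits shell exports pointing the host docker CLI at the daemon inside the node. Typical interactive use, mirroring the test's eval pattern:

	# After the eval, docker talks to the daemon inside the minikube node,
	# so `docker images` lists cluster-side images, not the host's.
	eval $(out/minikube-linux-amd64 -p functional-796375 docker-env)
	docker images
	# Undo the exports to return to the host daemon.
	eval $(out/minikube-linux-amd64 -p functional-796375 docker-env --unset)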

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-796375
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 image save --daemon kicbase/echo-server:functional-796375 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-796375
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)
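
ImageSaveDaemon is the reverse direction: after the host-side tag is removed, `image save --daemon` pulls the image back out of the cluster straight into the host's Docker daemon, which the final inspect confirms. The same three steps as a standalone snippet:

	docker rmi kicbase/echo-server:functional-796375
	out/minikube-linux-amd64 -p functional-796375 image save --daemon kicbase/echo-server:functional-796375
	docker image inspect kicbase/echo-server:functional-796375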

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-796375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
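
All three UpdateContextCmd cases run the same command: update-context rewrites the kubeconfig entry for the profile so it matches the cluster's current endpoint. A minimal check, with the kubectl verification added here as an assumption rather than taken from the test:

	out/minikube-linux-amd64 -p functional-796375 update-context --alsologtostderr -v=2
	# Hypothetical follow-up: confirm the context resolves after the rewrite.
	kubectl config current-context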

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-796375
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-796375
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-796375
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (101.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-846577 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:02:43.763770   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:43.770352   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:43.781823   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:43.803171   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:43.844764   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:43.926141   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:44.087455   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:44.409272   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:45.051061   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:46.332851   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:48.894283   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:54.016219   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-846577 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m41.053169029s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (101.73s)
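
StartCluster brings up a highly available cluster with multiple control planes in one command; --ha requests the extra control planes and --wait=true blocks until components are healthy. The invocation from the log, reusable as-is:

	out/minikube-linux-amd64 start -p ha-846577 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr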

TestMultiControlPlane/serial/DeployApp (6.28s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- rollout status deployment/busybox
E0920 17:03:04.258408   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-846577 -- rollout status deployment/busybox: (4.341167745s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-84kck -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-ld9lb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-sf2kk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-84kck -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-ld9lb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-sf2kk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-84kck -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-ld9lb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-sf2kk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.28s)
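
DeployApp applies a busybox deployment and then checks DNS from every replica. The per-pod exec calls above generalize to a loop; the loop form below is a sketch, not the test's literal code:

	out/minikube-linux-amd64 kubectl -p ha-846577 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 kubectl -p ha-846577 -- rollout status deployment/busybox
	for pod in $(out/minikube-linux-amd64 kubectl -p ha-846577 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
		# Every pod must be able to resolve in-cluster service names.
		out/minikube-linux-amd64 kubectl -p ha-846577 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done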

TestMultiControlPlane/serial/PingHostFromPods (1.04s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-84kck -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-84kck -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-ld9lb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-ld9lb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-sf2kk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-sf2kk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)
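
PingHostFromPods extracts the host IP by resolving host.minikube.internal inside each pod: with busybox's nslookup the fifth output line carries the resolved address, so awk 'NR==5' selects it and cut takes the third space-separated field. Captured into a variable (a sketch; in the run above the resolved address is the gateway, 192.168.49.1):

	HOST_IP=$(out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-84kck -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-amd64 kubectl -p ha-846577 -- exec busybox-7dff88458-84kck -- sh -c "ping -c 1 $HOST_IP"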

TestMultiControlPlane/serial/AddWorkerNode (20.46s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-846577 -v=7 --alsologtostderr
E0920 17:03:24.740016   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-846577 -v=7 --alsologtostderr: (19.637519715s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.46s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-846577 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (15.72s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp testdata/cp-test.txt ha-846577:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3092602763/001/cp-test_ha-846577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577:/home/docker/cp-test.txt ha-846577-m02:/home/docker/cp-test_ha-846577_ha-846577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test_ha-846577_ha-846577-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577:/home/docker/cp-test.txt ha-846577-m03:/home/docker/cp-test_ha-846577_ha-846577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test_ha-846577_ha-846577-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577:/home/docker/cp-test.txt ha-846577-m04:/home/docker/cp-test_ha-846577_ha-846577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test_ha-846577_ha-846577-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp testdata/cp-test.txt ha-846577-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3092602763/001/cp-test_ha-846577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m02:/home/docker/cp-test.txt ha-846577:/home/docker/cp-test_ha-846577-m02_ha-846577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test_ha-846577-m02_ha-846577.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m02:/home/docker/cp-test.txt ha-846577-m03:/home/docker/cp-test_ha-846577-m02_ha-846577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test_ha-846577-m02_ha-846577-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m02:/home/docker/cp-test.txt ha-846577-m04:/home/docker/cp-test_ha-846577-m02_ha-846577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test_ha-846577-m02_ha-846577-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp testdata/cp-test.txt ha-846577-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3092602763/001/cp-test_ha-846577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m03:/home/docker/cp-test.txt ha-846577:/home/docker/cp-test_ha-846577-m03_ha-846577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test_ha-846577-m03_ha-846577.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m03:/home/docker/cp-test.txt ha-846577-m02:/home/docker/cp-test_ha-846577-m03_ha-846577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test_ha-846577-m03_ha-846577-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m03:/home/docker/cp-test.txt ha-846577-m04:/home/docker/cp-test_ha-846577-m03_ha-846577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test_ha-846577-m03_ha-846577-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp testdata/cp-test.txt ha-846577-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3092602763/001/cp-test_ha-846577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m04:/home/docker/cp-test.txt ha-846577:/home/docker/cp-test_ha-846577-m04_ha-846577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577 "sudo cat /home/docker/cp-test_ha-846577-m04_ha-846577.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m04:/home/docker/cp-test.txt ha-846577-m02:/home/docker/cp-test_ha-846577-m04_ha-846577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test_ha-846577-m04_ha-846577-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 cp ha-846577-m04:/home/docker/cp-test.txt ha-846577-m03:/home/docker/cp-test_ha-846577-m04_ha-846577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m03 "sudo cat /home/docker/cp-test_ha-846577-m04_ha-846577-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.72s)
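
CopyFile cycles `minikube cp` through all three directions for every node pair: host to node, node to host, and node to node, verifying each copy with `ssh ... sudo cat`. One pass of the pattern, with a /tmp destination substituted for the test's temp directory:

	out/minikube-linux-amd64 -p ha-846577 cp testdata/cp-test.txt ha-846577:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-846577 cp ha-846577:/home/docker/cp-test.txt /tmp/cp-test_ha-846577.txt
	out/minikube-linux-amd64 -p ha-846577 cp ha-846577:/home/docker/cp-test.txt ha-846577-m02:/home/docker/cp-test_ha-846577_ha-846577-m02.txt
	# Read the file back on the destination node to verify the copy.
	out/minikube-linux-amd64 -p ha-846577 ssh -n ha-846577-m02 "sudo cat /home/docker/cp-test_ha-846577_ha-846577-m02.txt"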

TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-846577 node stop m02 -v=7 --alsologtostderr: (10.693067508s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr: exit status 7 (665.965622ms)

-- stdout --
	ha-846577
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-846577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-846577-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-846577-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0920 17:03:55.270639   99611 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:03:55.270762   99611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:55.270771   99611 out.go:358] Setting ErrFile to fd 2...
	I0920 17:03:55.270777   99611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:55.271025   99611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:03:55.271248   99611 out.go:352] Setting JSON to false
	I0920 17:03:55.271283   99611 mustload.go:65] Loading cluster: ha-846577
	I0920 17:03:55.271376   99611 notify.go:220] Checking for updates...
	I0920 17:03:55.271777   99611 config.go:182] Loaded profile config "ha-846577": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:03:55.271797   99611 status.go:174] checking status of ha-846577 ...
	I0920 17:03:55.272251   99611 cli_runner.go:164] Run: docker container inspect ha-846577 --format={{.State.Status}}
	I0920 17:03:55.291463   99611 status.go:364] ha-846577 host status = "Running" (err=<nil>)
	I0920 17:03:55.291499   99611 host.go:66] Checking if "ha-846577" exists ...
	I0920 17:03:55.291826   99611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-846577
	I0920 17:03:55.310216   99611 host.go:66] Checking if "ha-846577" exists ...
	I0920 17:03:55.310453   99611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:03:55.310488   99611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-846577
	I0920 17:03:55.329276   99611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/ha-846577/id_rsa Username:docker}
	I0920 17:03:55.420112   99611 ssh_runner.go:195] Run: systemctl --version
	I0920 17:03:55.423943   99611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:03:55.434502   99611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:03:55.484404   99611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-20 17:03:55.474871051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:03:55.485087   99611 kubeconfig.go:125] found "ha-846577" server: "https://192.168.49.254:8443"
	I0920 17:03:55.485118   99611 api_server.go:166] Checking apiserver status ...
	I0920 17:03:55.485151   99611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:03:55.497001   99611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2422/cgroup
	I0920 17:03:55.506257   99611 api_server.go:182] apiserver freezer: "4:freezer:/docker/a581d19d7981605ac4e8618ff2009c5dce693dfccdaad187c5282ee3745cf529/kubepods/burstable/pod5363f5fb05b671c164bd799652723f20/22b32038bcf28e4d43a64d3e17f22eec8b5098c608f452c83db56cfd80a03c93"
	I0920 17:03:55.506326   99611 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a581d19d7981605ac4e8618ff2009c5dce693dfccdaad187c5282ee3745cf529/kubepods/burstable/pod5363f5fb05b671c164bd799652723f20/22b32038bcf28e4d43a64d3e17f22eec8b5098c608f452c83db56cfd80a03c93/freezer.state
	I0920 17:03:55.514288   99611 api_server.go:204] freezer state: "THAWED"
	I0920 17:03:55.514318   99611 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 17:03:55.519404   99611 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 17:03:55.519429   99611 status.go:456] ha-846577 apiserver status = Running (err=<nil>)
	I0920 17:03:55.519439   99611 status.go:176] ha-846577 status: &{Name:ha-846577 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:03:55.519455   99611 status.go:174] checking status of ha-846577-m02 ...
	I0920 17:03:55.519707   99611 cli_runner.go:164] Run: docker container inspect ha-846577-m02 --format={{.State.Status}}
	I0920 17:03:55.537497   99611 status.go:364] ha-846577-m02 host status = "Stopped" (err=<nil>)
	I0920 17:03:55.537521   99611 status.go:377] host is not running, skipping remaining checks
	I0920 17:03:55.537529   99611 status.go:176] ha-846577-m02 status: &{Name:ha-846577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:03:55.537552   99611 status.go:174] checking status of ha-846577-m03 ...
	I0920 17:03:55.537878   99611 cli_runner.go:164] Run: docker container inspect ha-846577-m03 --format={{.State.Status}}
	I0920 17:03:55.557004   99611 status.go:364] ha-846577-m03 host status = "Running" (err=<nil>)
	I0920 17:03:55.557026   99611 host.go:66] Checking if "ha-846577-m03" exists ...
	I0920 17:03:55.557264   99611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-846577-m03
	I0920 17:03:55.574725   99611 host.go:66] Checking if "ha-846577-m03" exists ...
	I0920 17:03:55.575049   99611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:03:55.575087   99611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-846577-m03
	I0920 17:03:55.592482   99611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/ha-846577-m03/id_rsa Username:docker}
	I0920 17:03:55.688318   99611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:03:55.699309   99611 kubeconfig.go:125] found "ha-846577" server: "https://192.168.49.254:8443"
	I0920 17:03:55.699339   99611 api_server.go:166] Checking apiserver status ...
	I0920 17:03:55.699380   99611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:03:55.710044   99611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2218/cgroup
	I0920 17:03:55.719087   99611 api_server.go:182] apiserver freezer: "4:freezer:/docker/5a55d94bebd0b0cc032cb479160486ae7553914bc18f8db042bd9312d1c6fc0e/kubepods/burstable/pod773f7e51b29773a16fd1cbfbf0cb69a5/3a19d24f83a954223f7df875f268f56d03a3c78e08039eddc26d57ecbe5f0082"
	I0920 17:03:55.719142   99611 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5a55d94bebd0b0cc032cb479160486ae7553914bc18f8db042bd9312d1c6fc0e/kubepods/burstable/pod773f7e51b29773a16fd1cbfbf0cb69a5/3a19d24f83a954223f7df875f268f56d03a3c78e08039eddc26d57ecbe5f0082/freezer.state
	I0920 17:03:55.727133   99611 api_server.go:204] freezer state: "THAWED"
	I0920 17:03:55.727168   99611 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 17:03:55.730820   99611 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 17:03:55.730875   99611 status.go:456] ha-846577-m03 apiserver status = Running (err=<nil>)
	I0920 17:03:55.730885   99611 status.go:176] ha-846577-m03 status: &{Name:ha-846577-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:03:55.730903   99611 status.go:174] checking status of ha-846577-m04 ...
	I0920 17:03:55.731196   99611 cli_runner.go:164] Run: docker container inspect ha-846577-m04 --format={{.State.Status}}
	I0920 17:03:55.748369   99611 status.go:364] ha-846577-m04 host status = "Running" (err=<nil>)
	I0920 17:03:55.748394   99611 host.go:66] Checking if "ha-846577-m04" exists ...
	I0920 17:03:55.748682   99611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-846577-m04
	I0920 17:03:55.767175   99611 host.go:66] Checking if "ha-846577-m04" exists ...
	I0920 17:03:55.767472   99611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:03:55.767508   99611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-846577-m04
	I0920 17:03:55.785274   99611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/ha-846577-m04/id_rsa Username:docker}
	I0920 17:03:55.880481   99611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:03:55.892212   99611 status.go:176] ha-846577-m04 status: &{Name:ha-846577-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.36s)
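
Note the status call above returns exit status 7 once m02 is stopped, so scripts can branch on the exit code instead of parsing the stdout block. A sketch:

	out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
	rc=$?
	# 0 means all nodes are running; non-zero (7 in the run above) means degraded.
	if [ "$rc" -ne 0 ]; then
		echo "cluster degraded, minikube status exited with $rc"
	fi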

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 node start m02 -v=7 --alsologtostderr
E0920 17:04:05.702189   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-846577 node start m02 -v=7 --alsologtostderr: (22.008871772s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr: (1.280452635s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.136753026s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.14s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (261.82s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-846577 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-846577 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-846577 -v=7 --alsologtostderr: (33.866364755s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-846577 --wait=true -v=7 --alsologtostderr
E0920 17:05:27.626911   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.134872   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.141234   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.152665   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.174044   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.215509   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.296913   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.458406   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:28.780050   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:29.422079   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:30.704207   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:33.266576   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:38.388245   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:48.630197   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:06:09.112250   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:06:50.074141   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.764176   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:08:11.469102   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:08:11.995477   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-846577 --wait=true -v=7 --alsologtostderr: (3m47.842949053s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-846577
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (261.82s)
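
RestartClusterKeepsNodes asserts that a full stop/start cycle preserves the node list, including the worker added earlier. The cycle as plain commands, with the two `node list` outputs expected to match:

	out/minikube-linux-amd64 node list -p ha-846577 -v=7 --alsologtostderr
	out/minikube-linux-amd64 stop -p ha-846577 -v=7 --alsologtostderr
	out/minikube-linux-amd64 start -p ha-846577 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-amd64 node list -p ha-846577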

TestMultiControlPlane/serial/DeleteSecondaryNode (9.35s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-846577 node delete m03 -v=7 --alsologtostderr: (8.588887423s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.35s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (32.44s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-846577 stop -v=7 --alsologtostderr: (32.345888737s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr: exit status 7 (98.648352ms)

-- stdout --
	ha-846577
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-846577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-846577-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 17:09:25.355175  130559 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:09:25.355296  130559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:09:25.355306  130559 out.go:358] Setting ErrFile to fd 2...
	I0920 17:09:25.355310  130559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:09:25.355566  130559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:09:25.355750  130559 out.go:352] Setting JSON to false
	I0920 17:09:25.355784  130559 mustload.go:65] Loading cluster: ha-846577
	I0920 17:09:25.355906  130559 notify.go:220] Checking for updates...
	I0920 17:09:25.356326  130559 config.go:182] Loaded profile config "ha-846577": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:09:25.356352  130559 status.go:174] checking status of ha-846577 ...
	I0920 17:09:25.356936  130559 cli_runner.go:164] Run: docker container inspect ha-846577 --format={{.State.Status}}
	I0920 17:09:25.376006  130559 status.go:364] ha-846577 host status = "Stopped" (err=<nil>)
	I0920 17:09:25.376048  130559 status.go:377] host is not running, skipping remaining checks
	I0920 17:09:25.376057  130559 status.go:176] ha-846577 status: &{Name:ha-846577 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:09:25.376092  130559 status.go:174] checking status of ha-846577-m02 ...
	I0920 17:09:25.376534  130559 cli_runner.go:164] Run: docker container inspect ha-846577-m02 --format={{.State.Status}}
	I0920 17:09:25.392893  130559 status.go:364] ha-846577-m02 host status = "Stopped" (err=<nil>)
	I0920 17:09:25.392928  130559 status.go:377] host is not running, skipping remaining checks
	I0920 17:09:25.392945  130559 status.go:176] ha-846577-m02 status: &{Name:ha-846577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:09:25.392974  130559 status.go:174] checking status of ha-846577-m04 ...
	I0920 17:09:25.393236  130559 cli_runner.go:164] Run: docker container inspect ha-846577-m04 --format={{.State.Status}}
	I0920 17:09:25.409259  130559 status.go:364] ha-846577-m04 host status = "Stopped" (err=<nil>)
	I0920 17:09:25.409281  130559 status.go:377] host is not running, skipping remaining checks
	I0920 17:09:25.409288  130559 status.go:176] ha-846577-m04 status: &{Name:ha-846577-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.44s)
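The non-zero exit from `minikube status` is the expected result here: minikube composes the status exit code from per-component flags, and the exit status 7 above corresponds to host, kubelet, and apiserver all being down, so the test asserts on the code and the printed text rather than treating it as a failure. A sketch of parsing the plain-text status block back into records, assuming the layout shown stays stable (the type and helper are hypothetical, not minikube code):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// nodeStatus holds one node's block from the plain-text status output.
type nodeStatus struct {
	Name   string
	Fields map[string]string // e.g. "host" -> "Stopped"
}

// parseStatus splits the stdout block shown above into per-node records:
// a bare name starts a record, "key: value" lines fill it, a blank ends it.
func parseStatus(out string) []nodeStatus {
	var nodes []nodeStatus
	cur := -1
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "":
			cur = -1
		case strings.Contains(line, ": "):
			if cur >= 0 {
				k, v, _ := strings.Cut(line, ": ")
				nodes[cur].Fields[k] = v
			}
		default:
			nodes = append(nodes, nodeStatus{Name: line, Fields: map[string]string{}})
			cur = len(nodes) - 1
		}
	}
	return nodes
}

func main() {
	out := "ha-846577\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n\nha-846577-m04\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n"
	for _, n := range parseStatus(out) {
		fmt.Println(n.Name, "host:", n.Fields["host"])
	}
}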

TestMultiControlPlane/serial/RestartCluster (79.26s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-846577 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:10:28.135084   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-846577 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.509705622s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (31.37s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-846577 --control-plane -v=7 --alsologtostderr
E0920 17:10:55.836864   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-846577 --control-plane -v=7 --alsologtostderr: (30.552596652s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-846577 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (31.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestImageBuild/serial/Setup (20.51s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-725928 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-725928 --driver=docker  --container-runtime=docker: (20.506787434s)
--- PASS: TestImageBuild/serial/Setup (20.51s)

TestImageBuild/serial/NormalBuild (2.59s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-725928
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-725928: (2.594010171s)
--- PASS: TestImageBuild/serial/NormalBuild (2.59s)

TestImageBuild/serial/BuildWithBuildArg (0.98s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-725928
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.91s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-725928
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.91s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-725928
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

TestJSONOutput/start/Command (72.08s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-291952 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0920 17:12:43.767138   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-291952 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m12.077736314s)
--- PASS: TestJSONOutput/start/Command (72.08s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.53s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-291952 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.41s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-291952 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.81s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-291952 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-291952 --output=json --user=testUser: (10.814449792s)
--- PASS: TestJSONOutput/stop/Command (10.81s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-407626 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-407626 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.823562ms)

-- stdout --
	{"specversion":"1.0","id":"12c0c1b2-fb5b-4483-830e-a5f39f2ef5d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-407626] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"49836622-4bef-4e7b-8027-89f855f9ef10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"4de822e1-f93a-4a3c-aae4-39837c496066","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"87e6269a-63b8-43c0-b931-15cb0376fc5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig"}}
	{"specversion":"1.0","id":"f7391db7-262f-4b0f-b725-b05a53ebdd95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube"}}
	{"specversion":"1.0","id":"890b7a16-d60b-4a93-9455-7acda547a213","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fd169b63-9ebb-432f-931d-3164b6905f4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9437c2aa-287e-457a-b280-5648456bef9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-407626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-407626
--- PASS: TestErrorJSONOutput (0.20s)
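Each stdout line above is a CloudEvents 1.0 envelope; the `type` suffix (step, info, error) selects the shape of the `data` payload, and TestErrorJSONOutput asserts on the final error event (DRV_UNSUPPORTED_OS, exit code 56). A minimal decoder sketch covering only the fields visible in the log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event models just the envelope fields visible in the log above; the
// real schema is CloudEvents 1.0 with a type-specific data payload.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Two abridged lines lifted from the stdout above.
	log := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19672"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	sc := bufio.NewScanner(strings.NewReader(log))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines interleaved in the stream
		}
		if strings.HasSuffix(ev.Type, ".error") {
			// This is the event TestErrorJSONOutput asserts on.
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}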

TestKicCustomNetwork/create_custom_network (26.81s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-640410 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-640410 --network=: (24.841936139s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-640410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-640410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-640410: (1.947501779s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.81s)

TestKicCustomNetwork/use_default_bridge_network (23.9s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-844856 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-844856 --network=bridge: (22.042680584s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-844856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-844856
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-844856: (1.837317249s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.90s)

TestKicExistingNetwork (25.79s)
=== RUN   TestKicExistingNetwork
I0920 17:14:08.893655   15398 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 17:14:08.910281   15398 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 17:14:08.910368   15398 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 17:14:08.910396   15398 cli_runner.go:164] Run: docker network inspect existing-network
W0920 17:14:08.927103   15398 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 17:14:08.927133   15398 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0920 17:14:08.927156   15398 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0920 17:14:08.927325   15398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 17:14:08.944120   15398 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f7ce17a78e83 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:71:55:6a:f1} reservation:<nil>}
I0920 17:14:08.944624   15398 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b88400}
I0920 17:14:08.944655   15398 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 17:14:08.944700   15398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 17:14:09.007674   15398 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-701102 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-701102 --network=existing-network: (23.80755239s)
helpers_test.go:175: Cleaning up "existing-network-701102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-701102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-701102: (1.831558966s)
I0920 17:14:34.664224   15398 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.79s)
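The trace above shows the subnet-selection step behind --network=existing-network: inspect the candidate /24, skip it if a host interface already owns an address there (192.168.49.0/24 was taken by an earlier cluster), and create the first free one (192.168.58.0/24) via `docker network create`. A self-contained sketch of that scan using only the standard library; the 9-wide stride is inferred from the subnets seen in this report, and the interface check is simplified:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already owns an address inside
// the candidate subnet -- a simplified stand-in for the "skipping subnet
// ... that is taken" check in the trace above.
func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Scan 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, ...
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		if taken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		// The next step in the trace is the actual creation:
		//   docker network create --driver=bridge --subnet=<cidr> ...
		return
	}
}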

TestKicCustomSubnet (26.56s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-141644 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-141644 --subnet=192.168.60.0/24: (24.548929505s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-141644 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-141644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-141644
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-141644: (1.994011503s)
--- PASS: TestKicCustomSubnet (26.56s)

TestKicStaticIP (25.8s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-466724 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-466724 --static-ip=192.168.200.200: (23.678916918s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-466724 ip
helpers_test.go:175: Cleaning up "static-ip-466724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-466724
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-466724: (2.002622187s)
--- PASS: TestKicStaticIP (25.80s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (50.18s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-403366 --driver=docker  --container-runtime=docker
E0920 17:15:28.134767   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-403366 --driver=docker  --container-runtime=docker: (23.550690681s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-415449 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-415449 --driver=docker  --container-runtime=docker: (21.494735976s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-403366
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-415449
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-415449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-415449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-415449: (1.988111219s)
helpers_test.go:175: Cleaning up "first-403366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-403366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-403366: (2.026963546s)
--- PASS: TestMinikubeProfile (50.18s)
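`profile list -ojson` is the machine-readable form the assertions above consume. A decoding sketch, assuming the envelope with `valid`/`invalid` arrays that minikube prints; the struct keeps only two fields and is otherwise hypothetical:

package main

import (
	"encoding/json"
	"fmt"
)

// profiles models the envelope printed by `minikube profile list -o json`;
// only the fields used below are declared, the rest of the payload is ignored.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
	Invalid []json.RawMessage `json:"invalid"`
}

func main() {
	// Abbreviated, assumed sample of the shape the test consumes.
	raw := `{"invalid":[],"valid":[{"Name":"first-403366","Status":"Running"},{"Name":"second-415449","Status":"Running"}]}`

	var p profiles
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Printf("%s: %s\n", v.Name, v.Status)
	}
}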

TestMountStart/serial/StartWithMountFirst (10.06s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-938880 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-938880 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.064645877s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.06s)

TestMountStart/serial/VerifyMountFirst (0.24s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-938880 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (10.37s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-953501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-953501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.365347007s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.37s)

TestMountStart/serial/VerifyMountSecond (0.24s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-953501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.46s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-938880 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-938880 --alsologtostderr -v=5: (1.456260882s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-953501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-953501
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-953501: (1.173806851s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (8.67s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-953501
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-953501: (7.67072442s)
--- PASS: TestMountStart/serial/RestartStopped (8.67s)

TestMountStart/serial/VerifyMountPostStop (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-953501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (69.76s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167331 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:17:43.764302   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167331 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m9.288485489s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.76s)

TestMultiNode/serial/DeployApp2Nodes (39.02s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-167331 -- rollout status deployment/busybox: (2.981447188s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:18:04.607273   15398 retry.go:31] will retry after 912.723941ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:18:05.630265   15398 retry.go:31] will retry after 1.441159233s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:18:07.180953   15398 retry.go:31] will retry after 2.175050064s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:18:09.464714   15398 retry.go:31] will retry after 4.318242562s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:18:13.896280   15398 retry.go:31] will retry after 4.815500392s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:18:18.822078   15398 retry.go:31] will retry after 9.908658307s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:18:28.844980   15398 retry.go:31] will retry after 10.30345272s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-6vslw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-8gghx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-6vslw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-8gghx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-6vslw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-8gghx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.02s)
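The retry trace above (912ms, 1.4s, 2.1s, 4.3s, 4.8s, 9.9s, 10.3s) is a jittered, roughly doubling backoff around a condition that is expected to converge, here "two pod IPs reported". A hedged sketch of the same loop shape; the bounds and the condition function are illustrative, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling cond until it succeeds or the deadline
// passes, roughly doubling a jittered delay each attempt -- the same
// shape as the retry.go trace in the log above.
func retryWithBackoff(deadline time.Duration, cond func() error) error {
	delay := 500 * time.Millisecond
	start := time.Now()
	for {
		err := cond()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("condition never met: %w", err)
		}
		// Jitter the delay so parallel tests don't poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 8*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("expected 2 Pod IPs but got 1 (may be temporary)")
		}
		return nil
	})
	fmt.Println("done:", err)
}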

TestMultiNode/serial/PingHostFrom2Pods (0.72s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-6vslw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-6vslw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-8gghx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-167331 -- exec busybox-7dff88458-8gghx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
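The shell pipeline above (`nslookup ... | awk 'NR==5' | cut -d' ' -f3`) takes the fifth line of busybox nslookup output and extracts its third space-separated field, which is the resolved address of host.minikube.internal; the follow-up ping proves pod-to-host connectivity. The same extraction in Go, with an abridged, assumed sample of the nslookup output:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// nslookup output and return its third space-separated field.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // awk/cut count from 1
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Abridged busybox-style output; line 5 carries "Address 1: <ip> <name>".
	out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIP(out)) // 192.168.67.1
}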

TestMultiNode/serial/AddNode (18.57s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-167331 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-167331 -v 3 --alsologtostderr: (17.884521572s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.57s)

TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-167331 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.66s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (8.96s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp testdata/cp-test.txt multinode-167331:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3523871586/001/cp-test_multinode-167331.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331:/home/docker/cp-test.txt multinode-167331-m02:/home/docker/cp-test_multinode-167331_multinode-167331-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m02 "sudo cat /home/docker/cp-test_multinode-167331_multinode-167331-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331:/home/docker/cp-test.txt multinode-167331-m03:/home/docker/cp-test_multinode-167331_multinode-167331-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m03 "sudo cat /home/docker/cp-test_multinode-167331_multinode-167331-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp testdata/cp-test.txt multinode-167331-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3523871586/001/cp-test_multinode-167331-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331-m02:/home/docker/cp-test.txt multinode-167331:/home/docker/cp-test_multinode-167331-m02_multinode-167331.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331 "sudo cat /home/docker/cp-test_multinode-167331-m02_multinode-167331.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331-m02:/home/docker/cp-test.txt multinode-167331-m03:/home/docker/cp-test_multinode-167331-m02_multinode-167331-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m03 "sudo cat /home/docker/cp-test_multinode-167331-m02_multinode-167331-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp testdata/cp-test.txt multinode-167331-m03:/home/docker/cp-test.txt
E0920 17:19:06.830922   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3523871586/001/cp-test_multinode-167331-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331-m03:/home/docker/cp-test.txt multinode-167331:/home/docker/cp-test_multinode-167331-m03_multinode-167331.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331 "sudo cat /home/docker/cp-test_multinode-167331-m03_multinode-167331.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 cp multinode-167331-m03:/home/docker/cp-test.txt multinode-167331-m02:/home/docker/cp-test_multinode-167331-m03_multinode-167331-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 ssh -n multinode-167331-m02 "sudo cat /home/docker/cp-test_multinode-167331-m03_multinode-167331-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.96s)

TestMultiNode/serial/StopNode (2.09s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-167331 node stop m03: (1.176584493s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167331 status: exit status 7 (452.851479ms)

-- stdout --
	multinode-167331
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-167331-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-167331-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr: exit status 7 (461.188288ms)

-- stdout --
	multinode-167331
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-167331-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-167331-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 17:19:11.019154  218064 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:19:11.019302  218064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:19:11.019313  218064 out.go:358] Setting ErrFile to fd 2...
	I0920 17:19:11.019319  218064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:19:11.019585  218064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:19:11.019799  218064 out.go:352] Setting JSON to false
	I0920 17:19:11.019838  218064 mustload.go:65] Loading cluster: multinode-167331
	I0920 17:19:11.019886  218064 notify.go:220] Checking for updates...
	I0920 17:19:11.020391  218064 config.go:182] Loaded profile config "multinode-167331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:19:11.020422  218064 status.go:174] checking status of multinode-167331 ...
	I0920 17:19:11.021069  218064 cli_runner.go:164] Run: docker container inspect multinode-167331 --format={{.State.Status}}
	I0920 17:19:11.040278  218064 status.go:364] multinode-167331 host status = "Running" (err=<nil>)
	I0920 17:19:11.040311  218064 host.go:66] Checking if "multinode-167331" exists ...
	I0920 17:19:11.040561  218064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-167331
	I0920 17:19:11.059354  218064 host.go:66] Checking if "multinode-167331" exists ...
	I0920 17:19:11.059703  218064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:19:11.059775  218064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-167331
	I0920 17:19:11.078133  218064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/multinode-167331/id_rsa Username:docker}
	I0920 17:19:11.167777  218064 ssh_runner.go:195] Run: systemctl --version
	I0920 17:19:11.171628  218064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:19:11.181719  218064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:19:11.228918  218064 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-20 17:19:11.219778651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:19:11.229474  218064 kubeconfig.go:125] found "multinode-167331" server: "https://192.168.67.2:8443"
	I0920 17:19:11.229510  218064 api_server.go:166] Checking apiserver status ...
	I0920 17:19:11.229541  218064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:19:11.240298  218064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2365/cgroup
	I0920 17:19:11.248848  218064 api_server.go:182] apiserver freezer: "4:freezer:/docker/2e8ecef35d23c242c41066eb533e618a2ef52565f1f55e278834c22f0b767031/kubepods/burstable/podbc6f386a45398bc358331a67ae1f640e/98d1783b6c0027cc96150c4a28d660cf684c6fa41e2981364e0b5ae5a7c7a1d2"
	I0920 17:19:11.248920  218064 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2e8ecef35d23c242c41066eb533e618a2ef52565f1f55e278834c22f0b767031/kubepods/burstable/podbc6f386a45398bc358331a67ae1f640e/98d1783b6c0027cc96150c4a28d660cf684c6fa41e2981364e0b5ae5a7c7a1d2/freezer.state
	I0920 17:19:11.256616  218064 api_server.go:204] freezer state: "THAWED"
	I0920 17:19:11.256648  218064 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 17:19:11.260283  218064 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 17:19:11.260303  218064 status.go:456] multinode-167331 apiserver status = Running (err=<nil>)
	I0920 17:19:11.260312  218064 status.go:176] multinode-167331 status: &{Name:multinode-167331 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:19:11.260331  218064 status.go:174] checking status of multinode-167331-m02 ...
	I0920 17:19:11.260576  218064 cli_runner.go:164] Run: docker container inspect multinode-167331-m02 --format={{.State.Status}}
	I0920 17:19:11.277619  218064 status.go:364] multinode-167331-m02 host status = "Running" (err=<nil>)
	I0920 17:19:11.277640  218064 host.go:66] Checking if "multinode-167331-m02" exists ...
	I0920 17:19:11.277911  218064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-167331-m02
	I0920 17:19:11.295589  218064 host.go:66] Checking if "multinode-167331-m02" exists ...
	I0920 17:19:11.295865  218064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:19:11.295901  218064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-167331-m02
	I0920 17:19:11.314594  218064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19672-8616/.minikube/machines/multinode-167331-m02/id_rsa Username:docker}
	I0920 17:19:11.408073  218064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:19:11.419170  218064 status.go:176] multinode-167331-m02 status: &{Name:multinode-167331-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:19:11.419208  218064 status.go:174] checking status of multinode-167331-m03 ...
	I0920 17:19:11.419524  218064 cli_runner.go:164] Run: docker container inspect multinode-167331-m03 --format={{.State.Status}}
	I0920 17:19:11.437102  218064 status.go:364] multinode-167331-m03 host status = "Stopped" (err=<nil>)
	I0920 17:19:11.437130  218064 status.go:377] host is not running, skipping remaining checks
	I0920 17:19:11.437136  218064 status.go:176] multinode-167331-m03 status: &{Name:multinode-167331-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
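Note: the status log above shows how the apiserver health of a Docker-driver control plane is established: inspect the node container, locate the kube-apiserver process and its freezer cgroup, confirm the state is THAWED, then probe /healthz. A minimal manual sketch using the profile name and endpoint from this log (the endpoint and cgroup paths will differ on other clusters):

	# confirm the node container is running
	docker container inspect multinode-167331 --format={{.State.Status}}
	# locate the apiserver process inside the node (same pgrep pattern as in the log)
	minikube ssh -p multinode-167331 "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# probe the health endpoint reported above; -k skips certificate verification
	curl -k https://192.168.67.2:8443/healthz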

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-167331 node start m03 -v=7 --alsologtostderr: (9.214681202s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.87s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (100.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-167331
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-167331
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-167331: (22.521913567s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167331 --wait=true -v=8 --alsologtostderr
E0920 17:20:28.134567   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167331 --wait=true -v=8 --alsologtostderr: (1m17.71059337s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-167331
--- PASS: TestMultiNode/serial/RestartKeepsNodes (100.32s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-167331 node delete m03: (4.648148294s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (21.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-167331 stop: (21.26986622s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167331 status: exit status 7 (77.712727ms)

                                                
                                                
-- stdout --
	multinode-167331
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-167331-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr: exit status 7 (78.706917ms)

                                                
                                                
-- stdout --
	multinode-167331
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-167331-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 17:21:28.237523  233581 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:21:28.237622  233581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:21:28.237627  233581 out.go:358] Setting ErrFile to fd 2...
	I0920 17:21:28.237632  233581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:21:28.237823  233581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8616/.minikube/bin
	I0920 17:21:28.237977  233581 out.go:352] Setting JSON to false
	I0920 17:21:28.238008  233581 mustload.go:65] Loading cluster: multinode-167331
	I0920 17:21:28.238103  233581 notify.go:220] Checking for updates...
	I0920 17:21:28.238400  233581 config.go:182] Loaded profile config "multinode-167331": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:21:28.238418  233581 status.go:174] checking status of multinode-167331 ...
	I0920 17:21:28.238869  233581 cli_runner.go:164] Run: docker container inspect multinode-167331 --format={{.State.Status}}
	I0920 17:21:28.257167  233581 status.go:364] multinode-167331 host status = "Stopped" (err=<nil>)
	I0920 17:21:28.257206  233581 status.go:377] host is not running, skipping remaining checks
	I0920 17:21:28.257220  233581 status.go:176] multinode-167331 status: &{Name:multinode-167331 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:21:28.257273  233581 status.go:174] checking status of multinode-167331-m02 ...
	I0920 17:21:28.257558  233581 cli_runner.go:164] Run: docker container inspect multinode-167331-m02 --format={{.State.Status}}
	I0920 17:21:28.274175  233581 status.go:364] multinode-167331-m02 host status = "Stopped" (err=<nil>)
	I0920 17:21:28.274206  233581 status.go:377] host is not running, skipping remaining checks
	I0920 17:21:28.274214  233581 status.go:176] multinode-167331-m02 status: &{Name:multinode-167331-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.43s)
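Note: as the two status runs above show, `minikube status` exits with code 7 once the hosts are stopped; the test treats that exit code as the expected result rather than a failure. A short sketch of the same check, with the profile name taken from this log:

	minikube -p multinode-167331 stop
	minikube -p multinode-167331 status
	echo $?   # 7: hosts are stopped, which is the expected outcome here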

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (55.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167331 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:21:51.199083   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167331 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.129526348s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-167331 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.68s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (26.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-167331
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167331-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-167331-m02 --driver=docker  --container-runtime=docker: exit status 14 (60.27964ms)

                                                
                                                
-- stdout --
	* [multinode-167331-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-167331-m02' is duplicated with machine name 'multinode-167331-m02' in profile 'multinode-167331'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-167331-m03 --driver=docker  --container-runtime=docker
E0920 17:22:43.764248   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-167331-m03 --driver=docker  --container-runtime=docker: (24.177620205s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-167331
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-167331: exit status 80 (266.24455ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-167331 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-167331-m03 already exists in multinode-167331-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-167331-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-167331-m03: (1.956199084s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.50s)
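Note: the conflict checks above rely on profile names being globally unique, including against the machine names of an existing multi-node profile. A hedged sketch of the two cases, reusing the names from this log:

	# rejected (exit 14, MK_USAGE): the name collides with node m02 of profile multinode-167331
	minikube start -p multinode-167331-m02 --driver=docker --container-runtime=docker
	# a standalone profile with the m03 suffix starts fine...
	minikube start -p multinode-167331-m03 --driver=docker --container-runtime=docker
	# ...but adding a node to multinode-167331 then fails (exit 80) because the generated m03 name is already taken
	minikube node add -p multinode-167331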

                                                
                                    
x
+
TestPreload (148.06s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-591954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-591954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m32.030165899s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-591954 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-591954 image pull gcr.io/k8s-minikube/busybox: (2.151388217s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-591954
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-591954: (10.733297513s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-591954 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-591954 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (40.763447237s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-591954 image list
helpers_test.go:175: Cleaning up "test-preload-591954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-591954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-591954: (2.1781056s)
--- PASS: TestPreload (148.06s)
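Note: TestPreload exercises the preload workflow end to end: start an older Kubernetes version without a preloaded tarball, pull an extra image, stop, restart on the current default (which does use a preload), and verify the pulled image survived. A condensed sketch of the same steps; the profile name is illustrative, the versions are the ones used in this run:

	minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=2200 --driver=docker --container-runtime=docker
	minikube -p preload-demo image list   # busybox should still be listed after the preloaded restart
	minikube delete -p preload-demo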

                                                
                                    
x
+
TestScheduledStopUnix (94.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-179680 --memory=2048 --driver=docker  --container-runtime=docker
E0920 17:25:28.135114   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-179680 --memory=2048 --driver=docker  --container-runtime=docker: (21.440698023s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179680 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-179680 -n scheduled-stop-179680
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 17:25:44.102422   15398 retry.go:31] will retry after 137.786µs: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.103550   15398 retry.go:31] will retry after 97.148µs: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.104760   15398 retry.go:31] will retry after 226.407µs: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.105893   15398 retry.go:31] will retry after 295.332µs: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.107055   15398 retry.go:31] will retry after 552.233µs: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.108184   15398 retry.go:31] will retry after 1.061148ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.109314   15398 retry.go:31] will retry after 1.632686ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.111557   15398 retry.go:31] will retry after 2.283642ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.114794   15398 retry.go:31] will retry after 3.835998ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.119046   15398 retry.go:31] will retry after 2.446845ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.122325   15398 retry.go:31] will retry after 3.049042ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.125488   15398 retry.go:31] will retry after 8.009897ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.133756   15398 retry.go:31] will retry after 14.660269ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.149040   15398 retry.go:31] will retry after 17.104594ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
I0920 17:25:44.167331   15398 retry.go:31] will retry after 20.72399ms: open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/scheduled-stop-179680/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179680 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179680 -n scheduled-stop-179680
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179680
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179680
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-179680: exit status 7 (63.322977ms)

                                                
                                                
-- stdout --
	scheduled-stop-179680
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179680 -n scheduled-stop-179680
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179680 -n scheduled-stop-179680: exit status 7 (61.955172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-179680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-179680
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-179680: (1.623856568s)
--- PASS: TestScheduledStopUnix (94.31s)
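Note: the scheduled-stop sequence above arms a stop in the future, reads the remaining time from status, cancels it, then re-arms with a short window and waits for it to fire. A condensed sketch using the same flags (profile name illustrative):

	minikube stop -p sched-demo --schedule 5m          # arm a stop five minutes out
	minikube status -p sched-demo --format={{.TimeToStop}}
	minikube stop -p sched-demo --cancel-scheduled     # disarm it
	minikube stop -p sched-demo --schedule 15s         # re-arm with a short window and let it fire
	minikube status -p sched-demo                      # exits 7 once the host is stopped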

                                                
                                    
x
+
TestSkaffold (107.75s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3431738799 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-342972 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-342972 --memory=2600 --driver=docker  --container-runtime=docker: (24.652743494s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3431738799 run --minikube-profile skaffold-342972 --kube-context skaffold-342972 --status-check=true --port-forward=false --interactive=false
E0920 17:27:43.764226   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3431738799 run --minikube-profile skaffold-342972 --kube-context skaffold-342972 --status-check=true --port-forward=false --interactive=false: (1m5.980134904s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-76d845d55-sthhn" [641eaabd-4c03-40df-81a3-69a3df204e6d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002907573s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6fd87b98b4-vv8l8" [a6c9f5c5-0b05-4ad8-b368-c59467367f16] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003611884s
helpers_test.go:175: Cleaning up "skaffold-342972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-342972
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-342972: (2.716028392s)
--- PASS: TestSkaffold (107.75s)
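Note: the skaffold run above only needs a minikube profile and the matching kube-context, passed via --minikube-profile and --kube-context, so builds can target the cluster's Docker daemon directly. A hedged outline with an illustrative profile name (the skaffold binary path is whatever is installed locally):

	minikube start -p skaffold-demo --memory=2600 --driver=docker --container-runtime=docker
	skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
	  --status-check=true --port-forward=false --interactive=false
	kubectl get pods -l app=leeroy-app   # sample app from skaffold's test project, as waited on above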

                                                
                                    
x
+
TestInsufficientStorage (12.64s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-653955 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-653955 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.498654893s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0b930120-0088-4bac-88d2-f9fb2801e2a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-653955] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"69efee58-87d6-462b-8dff-c936fd3c08e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"217a5066-3de4-4171-a881-629309ced7f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed556984-f98b-4943-a640-01fa7c2943ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig"}}
	{"specversion":"1.0","id":"63d2cd14-e208-4278-acfc-64a190111e97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube"}}
	{"specversion":"1.0","id":"9a220bf6-4ee1-4626-a42a-f010e2d5154e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5a88d00d-a309-431d-9f1a-8e2caaa79e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c21c7d2-2ae7-4bfe-b053-3ff37f6c1884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8b30ede8-3dfe-4c62-93aa-92d970d5927f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e5741069-1db7-4c80-b1b1-c30096991bf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"585be416-45e6-4129-a9eb-f98c61c9c1e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8c89daeb-b81f-41cb-b4a5-9fa11e8c65d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-653955\" primary control-plane node in \"insufficient-storage-653955\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"837f9787-2143-4a38-9ba5-acac51714ca2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3199600b-2aa5-4322-92c8-60d7678a0c36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"69912e3d-fc8e-4d39-9a17-f579fb80531c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-653955 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-653955 --output=json --layout=cluster: exit status 7 (256.141329ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-653955","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-653955","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:28:55.082853  274189 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-653955" does not appear in /home/jenkins/minikube-integration/19672-8616/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-653955 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-653955 --output=json --layout=cluster: exit status 7 (247.559495ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-653955","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-653955","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:28:55.331847  274288 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-653955" does not appear in /home/jenkins/minikube-integration/19672-8616/kubeconfig
	E0920 17:28:55.341207  274288 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/insufficient-storage-653955/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-653955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-653955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-653955: (1.634447601s)
--- PASS: TestInsufficientStorage (12.64s)
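Note: this run is driven by two test-only environment variables that appear in the JSON output above (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE); with them set, start aborts with exit code 26 (RSRC_DOCKER_STORAGE) and status reports code 507. A sketch, assuming the binary under test honours those variables as it does here (profile name illustrative):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p storage-demo --memory=2048 --output=json --driver=docker --container-runtime=docker
	echo $?                                                            # 26: /var reported as out of space
	minikube status -p storage-demo --output=json --layout=cluster     # StatusCode 507 "InsufficientStorage"; exits 7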

                                                
                                    
x
+
TestRunningBinaryUpgrade (80.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.46598592 start -p running-upgrade-651108 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.46598592 start -p running-upgrade-651108 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.744071004s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-651108 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-651108 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.730844749s)
helpers_test.go:175: Cleaning up "running-upgrade-651108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-651108
E0920 17:33:31.898570   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:33.180236   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-651108: (3.051866419s)
--- PASS: TestRunningBinaryUpgrade (80.95s)
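Note: the running-upgrade flow starts a cluster with an old release and then points the current binary at the same profile without stopping it first; the old binary still takes --vm-driver while the new one takes --driver, exactly as in the commands above. Outline (the old-release path is illustrative):

	/tmp/minikube-v1.26.0 start -p running-upgrade --memory=2200 --vm-driver=docker --container-runtime=docker
	minikube start -p running-upgrade --memory=2200 --driver=docker --container-runtime=docker   # newer binary adopts the running profile
	minikube delete -p running-upgrade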

                                                
                                    
x
+
TestMissingContainerUpgrade (153.36s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1824305369 start -p missing-upgrade-153099 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1824305369 start -p missing-upgrade-153099 --memory=2200 --driver=docker  --container-runtime=docker: (1m30.521636325s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-153099
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-153099: (10.356243017s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-153099
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-153099 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-153099 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.792839806s)
helpers_test.go:175: Cleaning up "missing-upgrade-153099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-153099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-153099: (2.185257937s)
--- PASS: TestMissingContainerUpgrade (153.36s)
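Note: the missing-container variant removes the node container out from under the profile before upgrading, so the new binary has to recreate it rather than adopt it. Sketch (old-release path illustrative; with the Docker driver the container name matches the profile name):

	/tmp/minikube-v1.26.0 start -p missing-upgrade --memory=2200 --driver=docker --container-runtime=docker
	docker stop missing-upgrade && docker rm missing-upgrade     # simulate the node container disappearing
	minikube start -p missing-upgrade --memory=2200 --driver=docker --container-runtime=docker   # recreated by the new binary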

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-495257 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-495257 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (82.453828ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-495257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8616/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8616/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
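Note: as the MK_USAGE error above states, --kubernetes-version cannot be combined with --no-kubernetes; if a version is pinned in the global config it has to be unset first. Sketch (profile name illustrative):

	# rejected with exit 14
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=docker
	# clear any globally pinned version, then start without Kubernetes
	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=docker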

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (34.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-495257 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-495257 --driver=docker  --container-runtime=docker: (34.23714177s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-495257 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-495257 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-495257 --no-kubernetes --driver=docker  --container-runtime=docker: (15.613631136s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-495257 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-495257 status -o json: exit status 2 (290.134303ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-495257","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-495257
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-495257: (1.779163997s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.68s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (144.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2802675639 start -p stopped-upgrade-887351 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2802675639 start -p stopped-upgrade-887351 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m48.976068958s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2802675639 -p stopped-upgrade-887351 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2802675639 -p stopped-upgrade-887351 stop: (10.734702s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-887351 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-887351 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.590737608s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (144.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-495257 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-495257 --no-kubernetes --driver=docker  --container-runtime=docker: (7.616895571s)
--- PASS: TestNoKubernetes/serial/Start (7.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-495257 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-495257 "sudo systemctl is-active --quiet service kubelet": exit status 1 (320.792192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (5.757413641s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-495257
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-495257: (1.196120562s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-495257 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-495257 --driver=docker  --container-runtime=docker: (8.170226544s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-495257 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-495257 "sudo systemctl is-active --quiet service kubelet": exit status 1 (239.227738ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-887351
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-887351: (1.1208208s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                    
x
+
TestPause/serial/Start (70.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-232054 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0920 17:32:43.764223   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-232054 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m10.080255129s)
--- PASS: TestPause/serial/Start (70.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (68.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0920 17:33:35.742575   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:33:40.864185   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m8.678641641s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.68s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (33.07s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-232054 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 17:33:51.106441   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-232054 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.052835551s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (33.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0920 17:34:11.588207   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (33.180745498s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (33.18s)

                                                
                                    
x
+
TestPause/serial/Pause (0.52s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-232054 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.52s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-232054 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-232054 --output=json --layout=cluster: exit status 2 (291.634756ms)

                                                
                                                
-- stdout --
	{"Name":"pause-232054","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-232054","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
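Note: the cluster-layout status output above encodes component state as HTTP-like codes (200 OK, 405 Stopped, 418 Paused; 507 InsufficientStorage appeared earlier), and the command itself exits 2 for a paused cluster. A short sketch (profile name illustrative):

	minikube pause -p pause-demo
	minikube status -p pause-demo --output=json --layout=cluster   # StatusCode 418 "Paused"; exit code 2
	minikube unpause -p pause-demo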

                                                
                                    
x
+
TestPause/serial/Unpause (0.48s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-232054 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.48s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.59s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-232054 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.59s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.23s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-232054 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-232054 --alsologtostderr -v=5: (2.226743767s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.63s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.573729031s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-232054
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-232054: exit status 1 (18.107625ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-232054: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.63s)
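
Note: deletion is verified indirectly through the Docker CLI: once the profile is gone, docker volume inspect pause-232054 fails with "no such volume" (the exit status 1 above), which is the desired outcome. A hedged Go sketch of the same check — docker must be on PATH, and the helper name is made up for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// volumeGone reports whether the profile's Docker volume no longer exists,
	// treating a failed `docker volume inspect` with "no such volume" as success.
	func volumeGone(name string) bool {
		out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
		if err == nil {
			return false // volume still exists
		}
		return strings.Contains(string(out), "no such volume")
	}

	func main() {
		fmt.Println(volumeGone("pause-232054"))
	}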

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-444657 "pgrep -a kubelet"
I0920 17:34:29.489303   15398 config.go:182] Loaded profile config "custom-flannel-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-444657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zd4rk" [df22f6a2-1c8d-4542-a4a3-399e61a052b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zd4rk" [df22f6a2-1c8d-4542-a4a3-399e61a052b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003877609s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)
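
Note: the NetCatPod step replaces testdata/netcat-deployment.yaml and then blocks until a pod labeled app=netcat reports Running, with the 15m budget shown above. A sketch of that wait using client-go; it assumes the default kubeconfig at ~/.kube/config and is not the test's own helpers_test.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(15 * time.Minute) // same budget the test uses
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=netcat"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("running:", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		panic("timed out waiting for app=netcat")
	}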

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (68.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m8.85260642s)
--- PASS: TestNetworkPlugins/group/false/Start (68.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (24.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-444657 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context custom-flannel-444657 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147038915s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:34:55.863331   15398 retry.go:31] will retry after 526.814201ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-444657 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context custom-flannel-444657 exec deployment/netcat -- nslookup kubernetes.default: exit status 137 (8.036293951s)

                                                
                                                
** stderr ** 
	command terminated with exit code 137

                                                
                                                
** /stderr **
I0920 17:35:04.427138   15398 retry.go:31] will retry after 893.298877ms: exit status 137
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (24.76s)
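
Note: the DNS probe is allowed to fail transiently while flannel converges; each failed nslookup is retried after a growing delay (the retry.go lines above) and only a persistent failure would fail the test. A sketch of that pattern — the context name and command are taken from the log, while the backoff values here are illustrative, not minikube's retry schedule:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// Probe in-cluster DNS from the netcat deployment, backing off between attempts.
	func main() {
		backoff := 500 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", "--context", "custom-flannel-444657",
				"exec", "deployment/netcat", "--",
				"nslookup", "kubernetes.default").CombinedOutput()
			if err == nil {
				fmt.Printf("resolved on attempt %d:\n%s", attempt, out)
				return
			}
			fmt.Printf("attempt %d failed (%v), retrying after %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2
		}
		panic("DNS never resolved")
	}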

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-444657 "pgrep -a kubelet"
I0920 17:34:43.981607   15398 config.go:182] Loaded profile config "auto-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-444657 replace --force -f testdata/netcat-deployment.yaml
I0920 17:34:44.454739   15398 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n2664" [2818f63a-ede9-4ba5-b234-d665e7d0e78b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n2664" [2818f63a-ede9-4ba5-b234-d665e7d0e78b] Running
E0920 17:34:52.550257   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004776787s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (6.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:264: (dbg) Non-zero exit: kubectl --context custom-flannel-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.124003074s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:35:10.746207   15398 retry.go:31] will retry after 1.345585795s: exit status 1
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (6.58s)
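
Note: Localhost and HairPin exercise two different paths from inside the netcat pod: nc -z localhost 8080 checks that the pod can reach its own port directly, while nc -z netcat 8080 checks that it can reach itself back through its own Service, which requires hairpin NAT in the CNI/kube-proxy data path; the single exit-1 above is a transient failure absorbed by the retry. A sketch of both probes, with kubectl and the context name taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs the same netcat check the test uses, against either "localhost"
	// (direct) or "netcat" (the pod's own Service, i.e. the hairpin path).
	func probe(target string) error {
		cmd := fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)
		return exec.Command("kubectl", "--context", "custom-flannel-444657",
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", cmd).Run()
	}

	func main() {
		fmt.Println("localhost:", probe("localhost"))
		fmt.Println("hairpin:  ", probe("netcat"))
	}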

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (34.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0920 17:35:28.134412   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (34.539813764s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (34.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (62.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m2.059238771s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4zc2c" [b1da0e5c-b279-4202-ba0f-803867ddc411] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004109692s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-444657 "pgrep -a kubelet"
I0920 17:35:47.534955   15398 config.go:182] Loaded profile config "false-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-444657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-68vv5" [89e3a394-6042-4b31-9ed4-21d1616e1281] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-68vv5" [89e3a394-6042-4b31-9ed4-21d1616e1281] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004074311s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-444657 "pgrep -a kubelet"
I0920 17:35:53.356653   15398 config.go:182] Loaded profile config "kindnet-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-444657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lw5wp" [8b29672d-4047-4e24-8a23-74e1dd58ad8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lw5wp" [8b29672d-4047-4e24-8a23-74e1dd58ad8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004169202s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (50.265322527s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (26.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-444657 exec deployment/netcat -- nslookup kubernetes.default
E0920 17:36:14.472596   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context kindnet-444657 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.211030419s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 17:36:17.836080   15398 retry.go:31] will retry after 1.42728198s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context kindnet-444657 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context kindnet-444657 exec deployment/netcat -- nslookup kubernetes.default: exit status 137 (8.17160178s)

                                                
                                                
** stderr ** 
	command terminated with exit code 137

                                                
                                                
** /stderr **
I0920 17:36:27.435912   15398 retry.go:31] will retry after 1.95065641s: exit status 137
net_test.go:175: (dbg) Run:  kubectl --context kindnet-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (26.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (36.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (36.242834116s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-96h98" [77eb2520-60c2-4731-a742-3b65869f4fd8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004774594s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-444657 "pgrep -a kubelet"
I0920 17:36:39.933249   15398 config.go:182] Loaded profile config "calico-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-444657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sdzg6" [39b0da14-4e20-4b6b-841f-bb944a0234a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sdzg6" [39b0da14-4e20-4b6b-841f-bb944a0234a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003651194s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9vls9" [ad5be479-c090-431f-9a3b-7d13d6352220] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003908141s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (66.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m6.988044379s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-444657 "pgrep -a kubelet"
I0920 17:36:54.028003   15398 config.go:182] Loaded profile config "flannel-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-444657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n62rg" [d914799c-3c90-406d-8b0b-d334d420d24a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n62rg" [d914799c-3c90-406d-8b0b-d334d420d24a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005197984s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-444657 "pgrep -a kubelet"
I0920 17:36:55.937261   15398 config.go:182] Loaded profile config "enable-default-cni-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-444657 replace --force -f testdata/netcat-deployment.yaml
I0920 17:36:56.232161   15398 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-25wvz" [8f500a85-a9d9-4f7e-b14d-85f9290f114a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-25wvz" [8f500a85-a9d9-4f7e-b14d-85f9290f114a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005503122s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (44.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-444657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (44.697752896s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (44.70s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (132.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-090384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-090384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m12.630034011s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.63s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (71.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-872979 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 17:37:43.763737   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/addons-205029/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-872979 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m11.764045303s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-444657 "pgrep -a kubelet"
I0920 17:37:55.742184   15398 config.go:182] Loaded profile config "kubenet-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-444657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6jpnw" [c92ad2fd-6212-4ae9-9d9f-2ed58543f062] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6jpnw" [c92ad2fd-6212-4ae9-9d9f-2ed58543f062] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004200732s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-444657 "pgrep -a kubelet"
I0920 17:37:58.089661   15398 config.go:182] Loaded profile config "bridge-444657": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-444657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8fpk4" [61a46e76-a1d8-44a5-897b-f22c32f3f518] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8fpk4" [61a46e76-a1d8-44a5-897b-f22c32f3f518] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004482214s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-444657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-444657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (43.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-374906 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-374906 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (43.593413632s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (43.59s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-751328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 17:38:30.605515   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:38:31.200550   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/functional-796375/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-751328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m9.545100412s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.55s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-872979 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5ca6764-02f0-40f0-b612-491f729d472f] Pending
helpers_test.go:344: "busybox" [d5ca6764-02f0-40f0-b612-491f729d472f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d5ca6764-02f0-40f0-b612-491f729d472f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005724654s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-872979 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-872979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-872979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003461123s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-872979 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)
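
Note: the addon is enabled with --images and --registries overrides so the flow can be verified without pulling a real metrics-server image; the follow-up kubectl describe confirms the fake registry landed on the deployment. A sketch of checking the rendered image via client-go — it assumes the default kubeconfig and is not the test's own assertion:

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(),
			"metrics-server", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Expect the stubbed registry from the --registries override above.
		for _, c := range dep.Spec.Template.Spec.Containers {
			fmt.Println(c.Image, strings.HasPrefix(c.Image, "fake.domain/"))
		}
	}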

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-872979 --alsologtostderr -v=3
E0920 17:38:58.314376   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-872979 --alsologtostderr -v=3: (10.954056079s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872979 -n embed-certs-872979
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872979 -n embed-certs-872979: exit status 7 (146.200428ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-872979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)
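
Note: minikube status deliberately exits non-zero for a profile that is not running, and the test treats exit status 7 here as acceptable ("may be ok") before re-enabling the dashboard addon on the stopped cluster. A sketch of reading that exit code from Go instead of treating it as a hard failure, with the binary path and profile name taken from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "embed-certs-872979", "-n", "embed-certs-872979")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Non-zero exit codes encode the cluster state; 7 is what a stopped host returns above.
			fmt.Printf("status: %s (exit code %d, may be ok)\n", out, ee.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("status: %s\n", out)
	}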

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (262.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-872979 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-872979 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.605771835s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-872979 -n embed-certs-872979
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-374906 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8337b206-86e2-4c7e-ab4a-4281aee99a7c] Pending
helpers_test.go:344: "busybox" [8337b206-86e2-4c7e-ab4a-4281aee99a7c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8337b206-86e2-4c7e-ab4a-4281aee99a7c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003050341s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-374906 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-374906 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-374906 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (10.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-374906 --alsologtostderr -v=3
E0920 17:39:29.704448   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:29.710893   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:29.723084   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:29.744554   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-374906 --alsologtostderr -v=3: (10.693822516s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-374906 -n no-preload-374906
E0920 17:39:29.786157   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:29.868244   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-374906 -n no-preload-374906: exit status 7 (139.788377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-374906 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (262.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-374906 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 17:39:30.029742   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
	(4 further identical cert_rotation errors for custom-flannel-444657 between 17:39:30 and 17:39:34 omitted)
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-374906 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.26139271s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-374906 -n no-preload-374906
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.55s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-751328 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f642e63-0bc4-4471-8e62-92ce64485042] Pending
helpers_test.go:344: "busybox" [0f642e63-0bc4-4471-8e62-92ce64485042] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 17:39:39.959400   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [0f642e63-0bc4-4471-8e62-92ce64485042] Running
E0920 17:39:44.447918   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/auto-444657/client.crt: no such file or directory" logger="UnhandledError"
	(9 further identical cert_rotation errors for auto-444657 between 17:39:44 and 17:39:47 omitted)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004522684s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-751328 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)
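
DeployApp applies testdata/busybox.yaml and then polls the default namespace until a pod labelled integration-test=busybox reports healthy (about 9s of an 8m budget here). A rough equivalent of that wait, sketched with k8s.io/client-go against the current kubeconfig (assumes client-go is available via Go modules; names and timeout are illustrative, not the test's own code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load whatever context the current kubeconfig points at.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(8 * time.Minute) // same budget as the test
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "integration-test=busybox"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Printf("%s is Running\n", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for integration-test=busybox")
	}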

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-090384 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f527975a-7237-4cfd-8aa1-643c4a1cba28] Pending
helpers_test.go:344: "busybox" [f527975a-7237-4cfd-8aa1-643c4a1cba28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f527975a-7237-4cfd-8aa1-643c4a1cba28] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004275005s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-090384 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-751328 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-751328 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-090384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-090384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.18387544s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-090384 describe deploy/metrics-server -n kube-system
E0920 17:39:49.577336   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/auto-444657/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-751328 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-751328 --alsologtostderr -v=3: (10.82420278s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.82s)

x
+
TestStartStop/group/old-k8s-version/serial/Stop (10.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-090384 --alsologtostderr -v=3
E0920 17:39:50.201736   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:54.699578   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/auto-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-090384 --alsologtostderr -v=3: (10.769626986s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.77s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328: exit status 7 (61.856794ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-751328 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-751328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-751328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m25.884550068s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.18s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-090384 -n old-k8s-version-090384
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-090384 -n old-k8s-version-090384: exit status 7 (66.152484ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-090384 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (137.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-090384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 17:40:04.941762   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/auto-444657/client.crt: no such file or directory" logger="UnhandledError"
	(76 further cert_rotation "Unhandled Error" lines between 17:40:10 and 17:42:16 omitted; the same missing-client.crt failure repeats for the already-deleted profiles auto-444657, custom-flannel-444657, functional-796375, kindnet-444657, false-444657, calico-444657, flannel-444657 and enable-default-cni-444657)
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-090384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m17.391708285s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-090384 -n old-k8s-version-090384
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (137.70s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-hb522" [bfa36eb1-18e1-44e5-bc49-c9f921a308f8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003946085s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-hb522" [bfa36eb1-18e1-44e5-bc49-c9f921a308f8] Running
E0920 17:42:28.306260   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/auto-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:42:28.724828   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003795487s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-090384 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-090384 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
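
VerifyKubernetesImages shells out to minikube image list --format=json and flags any image that is not part of a stock cluster, such as the busybox test image above. A small consumer of that output, assuming the payload is a JSON array and decoding generically rather than committing to a field schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "old-k8s-version-090384",
			"image", "list", "--format=json").Output()
		if err != nil {
			panic(err)
		}
		// Decode into generic maps so no field names are assumed.
		var images []map[string]any
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img)
		}
	}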

x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-090384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-090384 -n old-k8s-version-090384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-090384 -n old-k8s-version-090384: exit status 2 (280.280687ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-090384 -n old-k8s-version-090384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-090384 -n old-k8s-version-090384: exit status 2 (281.790964ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-090384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-090384 -n old-k8s-version-090384
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-090384 -n old-k8s-version-090384
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.31s)
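
The Pause check drives a full round trip: pause the cluster, confirm the status probes now exit non-zero (exit status 2, apiserver Paused, kubelet Stopped), then unpause and confirm both probes pass again. The same sequence sketched with os/exec, assuming a minikube binary on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes minikube with the given arguments and reports its output;
	// errors are printed rather than fatal, because status probes against a
	// paused cluster exit non-zero by design.
	func run(args ...string) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("minikube %v -> err=%v\n%s", args, err, out)
	}

	func main() {
		profile := "old-k8s-version-090384" // profile name taken from this run
		run("pause", "-p", profile)
		run("status", "--format={{.APIServer}}", "-p", profile) // expect Paused, exit 2
		run("unpause", "-p", profile)
		run("status", "--format={{.APIServer}}", "-p", profile) // expect exit 0
	}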

x
+
TestStartStop/group/newest-cni/serial/FirstStart (29.33s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-483324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 17:42:37.199500   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/enable-default-cni-444657/client.crt: no such file or directory" logger="UnhandledError"
	(24 further cert_rotation "Unhandled Error" lines between 17:42:43 and 17:43:03 omitted, repeating the same missing-client.crt failure for the deleted profiles enable-default-cni-444657, addons-205029, calico-444657, kubenet-444657 and bridge-444657)
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-483324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (29.333509156s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.33s)
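
The newest-cni profile starts with --network-plugin=cni and hands kubeadm the pod network 10.42.0.0/16 via --extra-config. A /16 pod CIDR leaves 16 host bits, i.e. 65,536 pod addresses; a quick standard-library check of that arithmetic:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, ipnet, err := net.ParseCIDR("10.42.0.0/16") // pod CIDR from the start flags above
		if err != nil {
			panic(err)
		}
		ones, bits := ipnet.Mask.Size()
		fmt.Printf("%s: %d host bits, %d addresses\n", ipnet, bits-ones, 1<<(bits-ones))
	}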

x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-483324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-483324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.157260441s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

x
+
TestStartStop/group/newest-cni/serial/Stop (10.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-483324 --alsologtostderr -v=3
E0920 17:43:06.188543   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/kubenet-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:43:08.558131   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/bridge-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:43:09.686766   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-483324 --alsologtostderr -v=3: (10.785118442s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.79s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-483324 -n newest-cni-483324
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-483324 -n newest-cni-483324: exit status 7 (66.328849ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-483324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.17s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-483324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 17:43:16.430704   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/kubenet-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:43:18.161034   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/enable-default-cni-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:43:18.800233   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/bridge-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-483324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (14.866679086s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-483324 -n newest-cni-483324
E0920 17:43:30.893733   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/kindnet-444657/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.17s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4bbgl" [bd83d570-2d92-43ec-80b9-b02b5c5a97c0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003240895s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4bbgl" [bd83d570-2d92-43ec-80b9-b02b5c5a97c0] Running
E0920 17:43:30.605585   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/skaffold-342972/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004330021s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-872979 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-483324 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

x
+
TestStartStop/group/newest-cni/serial/Pause (2.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-483324 --alsologtostderr -v=1
E0920 17:43:31.620348   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/false-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-483324 -n newest-cni-483324
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-483324 -n newest-cni-483324: exit status 2 (294.741237ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-483324 -n newest-cni-483324
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-483324 -n newest-cni-483324: exit status 2 (286.131215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-483324 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-483324 -n newest-cni-483324
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-483324 -n newest-cni-483324
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.39s)
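
Everything in the pause cycle above goes through the CLI: pause, then status rendered through a Go template (--format={{.APIServer}}, --format={{.Kubelet}}) to confirm Paused/Stopped, then unpause. minikube status intentionally exits non-zero when a component is not running, which is why the harness records "exit status 2 (may be ok)" rather than failing. A minimal sketch of the same cycle via os/exec; the helper is illustrative and assumes the same binary path and profile:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // status runs `minikube status` with a Go-template format, tolerating the
    // non-zero exits minikube uses for paused/stopped components.
    func status(profile, field string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
        var ee *exec.ExitError
        if err != nil && !errors.As(err, &ee) {
            return "", err // the command could not run at all
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        profile := "newest-cni-483324"
        exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
        api, _ := status(profile, "APIServer") // expect "Paused"
        kub, _ := status(profile, "Kubelet")   // expect "Stopped"
        fmt.Println(api, kub)
        exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
    }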

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-872979 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
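
The image audit above shells out to `image list --format=json` and flags anything outside minikube's expected image set (here gcr.io/k8s-minikube/busybox:1.28.4-glibc). The JSON schema is not shown in this report, so this sketch decodes generically instead of assuming field names; the binary path and profile match the run above, everything else is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-872979",
            "image", "list", "--format=json").Output()
        if err != nil {
            panic(err)
        }
        // Decode into a generic structure: the exact schema is not documented here.
        var images []map[string]any
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Println(img) // scan entries for non-minikube repositories
        }
    }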

TestStartStop/group/embed-certs/serial/Pause (2.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-872979 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872979 -n embed-certs-872979
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872979 -n embed-certs-872979: exit status 2 (279.427867ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-872979 -n embed-certs-872979
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-872979 -n embed-certs-872979: exit status 2 (280.636216ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-872979 --alsologtostderr -v=1
E0920 17:43:36.912375   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/kubenet-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-872979 -n embed-certs-872979
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-872979 -n embed-certs-872979
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.35s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tl2dq" [e0d08e6a-4470-4281-977b-abc9e0aaaa77] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004295036s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tl2dq" [e0d08e6a-4470-4281-977b-abc9e0aaaa77] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004159964s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-374906 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-374906 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (2.34s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-374906 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-374906 -n no-preload-374906
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-374906 -n no-preload-374906: exit status 2 (283.199101ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-374906 -n no-preload-374906
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-374906 -n no-preload-374906: exit status 2 (284.324069ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-374906 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-374906 -n no-preload-374906
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-374906 -n no-preload-374906
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dz8cb" [3a14d75e-5b9e-4168-b225-2256839f5da8] Running
E0920 17:44:29.590675   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/no-preload-374906/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:29.705261   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/custom-flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:31.608502   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/flannel-444657/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004090293s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dz8cb" [3a14d75e-5b9e-4168-b225-2256839f5da8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004672703s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-751328 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-751328 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-751328 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328: exit status 2 (277.526026ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328: exit status 2 (291.681493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-751328 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328
E0920 17:44:39.162742   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/old-k8s-version-090384/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:39.169146   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/old-k8s-version-090384/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:39.180576   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/old-k8s-version-090384/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:39.202003   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/old-k8s-version-090384/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:39.243430   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/old-k8s-version-090384/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:39.324824   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/old-k8s-version-090384/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-751328 -n default-k8s-diff-port-751328
E0920 17:44:39.486470   15398 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/old-k8s-version-090384/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.32s)


Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.77s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-444657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-444657

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-444657

>>> host: /etc/nsswitch.conf:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /etc/hosts:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /etc/resolv.conf:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-444657

>>> host: crictl pods:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: crictl containers:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> k8s: describe netcat deployment:
error: context "cilium-444657" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-444657" does not exist

>>> k8s: netcat logs:
error: context "cilium-444657" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-444657" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-444657" does not exist

>>> k8s: coredns logs:
error: context "cilium-444657" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-444657" does not exist

>>> k8s: api server logs:
error: context "cilium-444657" does not exist

>>> host: /etc/cni:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: ip a s:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: ip r s:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: iptables-save:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: iptables table nat:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-444657

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-444657

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-444657" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-444657" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-444657

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-444657

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-444657" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-444657" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-444657" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-444657" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-444657" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: kubelet daemon config:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> k8s: kubelet logs:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 17:29:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-495257
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-8616/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 17:29:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: offline-docker-451121
contexts:
- context:
    cluster: NoKubernetes-495257
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 17:29:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-495257
  name: NoKubernetes-495257
- context:
    cluster: offline-docker-451121
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 17:29:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-451121
  name: offline-docker-451121
current-context: offline-docker-451121
kind: Config
preferences: {}
users:
- name: NoKubernetes-495257
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/NoKubernetes-495257/client.crt
    client-key: /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/NoKubernetes-495257/client.key
- name: offline-docker-451121
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/offline-docker-451121/client.crt
    client-key: /home/jenkins/minikube-integration/19672-8616/.minikube/profiles/offline-docker-451121/client.key
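
Every kubectl probe in this dump fails the same way because, as the kubeconfig above shows, only the NoKubernetes-495257 and offline-docker-451121 contexts exist: the cilium-444657 profile was cleaned up without ever being started. A minimal sketch of the same existence check with client-go's clientcmd; the kubeconfig path is illustrative:

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; the debug harness reads the CI worker's kubeconfig.
        cfg, err := clientcmd.LoadFromFile(os.Getenv("HOME") + "/.kube/config")
        if err != nil {
            fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
            os.Exit(1)
        }
        if _, ok := cfg.Contexts["cilium-444657"]; !ok {
            // Mirrors kubectl's error in the probes above.
            fmt.Println(`context "cilium-444657" does not exist`)
        }
    }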

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-444657

>>> host: docker daemon status:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: docker daemon config:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: docker system info:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: cri-docker daemon status:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: cri-docker daemon config:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: cri-dockerd version:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: containerd daemon status:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: containerd daemon config:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: containerd config dump:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: crio daemon status:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: crio daemon config:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: /etc/crio:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

>>> host: crio config:
* Profile "cilium-444657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444657"

----------------------- debugLogs end: cilium-444657 [took: 3.632706019s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-444657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-444657
I0920 17:29:36.799574   15398 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1414948815/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000813b30 gz:0xc000813b38 tar:0xc000813ae0 tar.bz2:0xc000813af0 tar.gz:0xc000813b00 tar.xz:0xc000813b10 tar.zst:0xc000813b20 tbz2:0xc000813af0 tgz:0xc000813b00 txz:0xc000813b10 tzst:0xc000813b20 xz:0xc000813b40 zip:0xc000813b50 zst:0xc000813b48] Getters:map[file:0xc0019392c0 http:0xc000569c20 https:0xc000569c70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 17:29:36.799632   15398 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1414948815/002/docker-machine-driver-kvm2
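
The failed download above uses go-getter's checksum convention: the Src URL carries ?checksum=file:<url>.sha256, so go-getter first fetches that sidecar checksum file and fails the whole transfer when the request 404s ("invalid checksum: Error downloading checksum file"), after which minikube retries with the un-suffixed "common" binary name. A minimal sketch of the same fetch, assuming github.com/hashicorp/go-getter; paths are illustrative:

    package main

    import (
        "fmt"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64"
        // go-getter downloads the file named by the checksum query, verifies the
        // payload against it, and fails the fetch if the sidecar itself 404s.
        src := base + "?checksum=file:" + base + ".sha256"
        if err := getter.GetFile("/tmp/docker-machine-driver-kvm2", src); err != nil {
            fmt.Println("download failed:", err)
        }
    }
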
--- SKIP: TestNetworkPlugins/group/cilium (3.77s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-353179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-353179
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
