Test Report: Docker_Linux_containerd 18007

                    
fc27285b44a3684906f383c28cb886ae15cd7524:2024-01-31:32829

Test failures (8/320)

TestAddons/parallel/NvidiaDevicePlugin (7.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lmr4m" [3c951f22-d962-4f13-929a-e7a2552f629c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004819962s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-214491
addons_test.go:955: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-214491: exit status 11 (280.175545ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-31T14:12:28Z" level=error msg="stat /run/containerd/runc/k8s.io/3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
addons_test.go:956: failed to disable nvidia-device-plugin: args "out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-214491" : exit status 11
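The `MK_ADDON_DISABLE_PAUSED` failure above bottoms out in `runc list` exiting 1 because a container state directory under `/run/containerd/runc/k8s.io` vanished before runc could stat it — plausibly a container torn down concurrently with the listing. The sketch below is a simulation of that failure mode only, not runc itself; the helper name `stat_container_state` and the shortened container ID are hypothetical:

```python
# Simulation (assumption): runc's listing stats each container's state
# directory under its --root; a directory removed between enumeration and
# the stat yields the "no such file or directory" error seen in the stderr
# above. This helper mimics just that stat step.
import os
import shutil
import tempfile


def stat_container_state(root: str, cid: str) -> str:
    """Stand-in for the per-container stat of <root>/<cid>."""
    path = os.path.join(root, cid)
    try:
        os.stat(path)
    except FileNotFoundError:
        # Same shape as the log line: stat <path>: no such file or directory
        raise RuntimeError(f"stat {path}: no such file or directory") from None
    return path


if __name__ == "__main__":
    root = tempfile.mkdtemp()
    cid = "3ff841a74152"  # shortened, illustrative container ID
    os.mkdir(os.path.join(root, cid))
    stat_container_state(root, cid)         # succeeds while the dir exists
    shutil.rmtree(os.path.join(root, cid))  # container torn down concurrently
    try:
        stat_container_state(root, cid)
    except RuntimeError as e:
        print(e)  # e.g. stat /tmp/.../3ff841a74152: no such file or directory
```

The race window is between enumerating the root directory and stat-ing each entry, which is why the error is transient and hard to reproduce on demand.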
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-214491
helpers_test.go:235: (dbg) docker inspect addons-214491:
-- stdout --
	[
	    {
	        "Id": "8737208cfcdc52b84ee9a1f2f8218dd282708736b67fe58ea1670ece8d1dd998",
	        "Created": "2024-01-31T14:10:42.09342074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 126141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-31T14:10:42.395063379Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/8737208cfcdc52b84ee9a1f2f8218dd282708736b67fe58ea1670ece8d1dd998/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8737208cfcdc52b84ee9a1f2f8218dd282708736b67fe58ea1670ece8d1dd998/hostname",
	        "HostsPath": "/var/lib/docker/containers/8737208cfcdc52b84ee9a1f2f8218dd282708736b67fe58ea1670ece8d1dd998/hosts",
	        "LogPath": "/var/lib/docker/containers/8737208cfcdc52b84ee9a1f2f8218dd282708736b67fe58ea1670ece8d1dd998/8737208cfcdc52b84ee9a1f2f8218dd282708736b67fe58ea1670ece8d1dd998-json.log",
	        "Name": "/addons-214491",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-214491:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-214491",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/915311a8fa0a0eaad3ac7c8fe448c837a5037532661e4c140576dd421e171c66-init/diff:/var/lib/docker/overlay2/5f9b5af8b2f6445fb760404f197bfacc3628584467ea8410c1ba7d01af15f15d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/915311a8fa0a0eaad3ac7c8fe448c837a5037532661e4c140576dd421e171c66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/915311a8fa0a0eaad3ac7c8fe448c837a5037532661e4c140576dd421e171c66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/915311a8fa0a0eaad3ac7c8fe448c837a5037532661e4c140576dd421e171c66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-214491",
	                "Source": "/var/lib/docker/volumes/addons-214491/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-214491",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "MacAddress": "02:42:c0:a8:31:02",
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-214491",
	                "name.minikube.sigs.k8s.io": "addons-214491",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "721b634ba24d68a6d339c2b8a26a29fd97199a451768780bf2ea91a0c6f9b46e",
	            "SandboxKey": "/var/run/docker/netns/721b634ba24d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-214491": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8737208cfcdc",
	                        "addons-214491"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "d1e6bce6826f9111e3cf41c720f0114a50da870b1b1e79f7de77451d9eef5e81",
	                    "EndpointID": "1fa5c88a06cd2b1b9192851624c3815808587e69d3a913c081a904702f22870a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-214491",
	                        "8737208cfcdc"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
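The error came from the "check paused" step, and the `docker inspect` dump above shows the node container itself was `"Running": true` and `"Paused": false` — the pause check failed inside the container, not at the Docker level. A quick sketch for pulling just the `State` block out of `docker inspect` output (which is a JSON array, as shown above); the helper name is hypothetical:

```python
import json


def container_state(inspect_json: str) -> dict:
    """Extract the State block from `docker inspect <name>` output,
    which is a JSON array with one object per inspected container."""
    data = json.loads(inspect_json)
    return data[0]["State"]


# Trimmed-down stand-in for the inspect output in the log above.
sample = '[{"State": {"Status": "running", "Running": true, "Paused": false}}]'
state = container_state(sample)
print(state["Status"], state["Paused"])  # running False
```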
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214491 -n addons-214491
helpers_test.go:244: <<< TestAddons/parallel/NvidiaDevicePlugin FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-214491 logs -n 25: (1.337381453s)
helpers_test.go:252: TestAddons/parallel/NvidiaDevicePlugin logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-256653   | jenkins | v1.32.0 | 31 Jan 24 14:09 UTC |                     |
	|         | -p download-only-256653              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-256653              | download-only-256653   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| start   | -o=json --download-only              | download-only-389052   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | -p download-only-389052              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-389052              | download-only-389052   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| start   | -o=json --download-only              | download-only-755607   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | -p download-only-755607              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-755607              | download-only-755607   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-256653              | download-only-256653   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-389052              | download-only-389052   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-755607              | download-only-755607   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| start   | --download-only -p                   | download-docker-773457 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | download-docker-773457               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-773457            | download-docker-773457 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| start   | --download-only -p                   | binary-mirror-953003   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | binary-mirror-953003                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43321               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-953003              | binary-mirror-953003   | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| addons  | disable dashboard -p                 | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | addons-214491                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | addons-214491                        |                        |         |         |                     |                     |
	| start   | -p addons-214491 --wait=true         | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:12 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-214491 addons                 | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:12 UTC | 31 Jan 24 14:12 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-214491 addons disable         | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:12 UTC | 31 Jan 24 14:12 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:12 UTC | 31 Jan 24 14:12 UTC |
	|         | addons-214491                        |                        |         |         |                     |                     |
	| ip      | addons-214491 ip                     | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:12 UTC | 31 Jan 24 14:12 UTC |
	| addons  | addons-214491 addons disable         | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:12 UTC | 31 Jan 24 14:12 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-214491          | jenkins | v1.32.0 | 31 Jan 24 14:12 UTC |                     |
	|         | -p addons-214491                     |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 14:10:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 14:10:20.239124  125487 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:10:20.239334  125487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:10:20.239350  125487 out.go:309] Setting ErrFile to fd 2...
	I0131 14:10:20.239363  125487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:10:20.239629  125487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:10:20.240462  125487 out.go:303] Setting JSON to false
	I0131 14:10:20.241579  125487 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":67972,"bootTime":1706642248,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 14:10:20.241664  125487 start.go:138] virtualization: kvm guest
	I0131 14:10:20.243921  125487 out.go:177] * [addons-214491] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 14:10:20.245617  125487 out.go:177]   - MINIKUBE_LOCATION=18007
	I0131 14:10:20.245679  125487 notify.go:220] Checking for updates...
	I0131 14:10:20.246863  125487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 14:10:20.248720  125487 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:10:20.250273  125487 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	I0131 14:10:20.251702  125487 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 14:10:20.253040  125487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 14:10:20.254654  125487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 14:10:20.277838  125487 docker.go:122] docker version: linux-25.0.1:Docker Engine - Community
	I0131 14:10:20.277991  125487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:10:20.328270  125487 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-01-31 14:10:20.31874443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:10:20.328474  125487 docker.go:295] overlay module found
	I0131 14:10:20.330457  125487 out.go:177] * Using the docker driver based on user configuration
	I0131 14:10:20.331840  125487 start.go:298] selected driver: docker
	I0131 14:10:20.331854  125487 start.go:902] validating driver "docker" against <nil>
	I0131 14:10:20.331867  125487 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 14:10:20.332730  125487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:10:20.386750  125487 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-01-31 14:10:20.376984302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:10:20.386913  125487 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 14:10:20.387129  125487 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 14:10:20.388762  125487 out.go:177] * Using Docker driver with root privileges
	I0131 14:10:20.390236  125487 cni.go:84] Creating CNI manager for ""
	I0131 14:10:20.390265  125487 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0131 14:10:20.390280  125487 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0131 14:10:20.390302  125487 start_flags.go:321] config:
	{Name:addons-214491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-214491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 14:10:20.391806  125487 out.go:177] * Starting control plane node addons-214491 in cluster addons-214491
	I0131 14:10:20.392956  125487 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0131 14:10:20.394289  125487 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0131 14:10:20.395440  125487 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0131 14:10:20.395492  125487 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0131 14:10:20.395507  125487 cache.go:56] Caching tarball of preloaded images
	I0131 14:10:20.395544  125487 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0131 14:10:20.395611  125487 preload.go:174] Found /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0131 14:10:20.395629  125487 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0131 14:10:20.396021  125487 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/config.json ...
	I0131 14:10:20.396060  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/config.json: {Name:mk52e93e7af9c1113f1244badd52ecd8e57ddd60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:20.412921  125487 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0131 14:10:20.413141  125487 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0131 14:10:20.413170  125487 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0131 14:10:20.413178  125487 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0131 14:10:20.413187  125487 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0131 14:10:20.413194  125487 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0131 14:10:33.256880  125487 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0131 14:10:33.256945  125487 cache.go:194] Successfully downloaded all kic artifacts
	I0131 14:10:33.256998  125487 start.go:365] acquiring machines lock for addons-214491: {Name:mkbc57ae4120ed19530588052ad7c3467a369857 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 14:10:33.257127  125487 start.go:369] acquired machines lock for "addons-214491" in 100.261µs
	I0131 14:10:33.257155  125487 start.go:93] Provisioning new machine with config: &{Name:addons-214491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-214491 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0131 14:10:33.257281  125487 start.go:125] createHost starting for "" (driver="docker")
	I0131 14:10:33.340725  125487 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0131 14:10:33.341107  125487 start.go:159] libmachine.API.Create for "addons-214491" (driver="docker")
	I0131 14:10:33.341148  125487 client.go:168] LocalClient.Create starting
	I0131 14:10:33.341326  125487 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca.pem
	I0131 14:10:33.607224  125487 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/cert.pem
	I0131 14:10:33.702617  125487 cli_runner.go:164] Run: docker network inspect addons-214491 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0131 14:10:33.720095  125487 cli_runner.go:211] docker network inspect addons-214491 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0131 14:10:33.720203  125487 network_create.go:281] running [docker network inspect addons-214491] to gather additional debugging logs...
	I0131 14:10:33.720228  125487 cli_runner.go:164] Run: docker network inspect addons-214491
	W0131 14:10:33.737516  125487 cli_runner.go:211] docker network inspect addons-214491 returned with exit code 1
	I0131 14:10:33.737554  125487 network_create.go:284] error running [docker network inspect addons-214491]: docker network inspect addons-214491: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-214491 not found
	I0131 14:10:33.737577  125487 network_create.go:286] output of [docker network inspect addons-214491]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-214491 not found
	
	** /stderr **
	I0131 14:10:33.737695  125487 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0131 14:10:33.757772  125487 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002855640}
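The subnet fields minikube logs above for 192.168.49.0/24 (gateway, client range, broadcast) can be reproduced with Python's stdlib `ipaddress` module — an illustrative sketch, not minikube's actual subnet-picking code:

```python
import ipaddress

# Derive the same fields minikube logs for the chosen /24 subnet.
net = ipaddress.ip_network("192.168.49.0/24")
gateway = net.network_address + 1       # first usable host, used as the bridge gateway
client_min = net.network_address + 2    # first address handed to a container
client_max = net.broadcast_address - 1  # last usable host

print(net.network_address, net.netmask)  # 192.168.49.0 255.255.255.0
print(gateway, client_min, client_max)   # 192.168.49.1 192.168.49.2 192.168.49.254
print(net.broadcast_address)             # 192.168.49.255
```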
	I0131 14:10:33.757851  125487 network_create.go:124] attempt to create docker network addons-214491 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0131 14:10:33.757935  125487 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-214491 addons-214491
	I0131 14:10:34.006693  125487 network_create.go:108] docker network addons-214491 192.168.49.0/24 created
	I0131 14:10:34.006726  125487 kic.go:121] calculated static IP "192.168.49.2" for the "addons-214491" container
	I0131 14:10:34.006798  125487 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0131 14:10:34.022653  125487 cli_runner.go:164] Run: docker volume create addons-214491 --label name.minikube.sigs.k8s.io=addons-214491 --label created_by.minikube.sigs.k8s.io=true
	I0131 14:10:34.127442  125487 oci.go:103] Successfully created a docker volume addons-214491
	I0131 14:10:34.127561  125487 cli_runner.go:164] Run: docker run --rm --name addons-214491-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-214491 --entrypoint /usr/bin/test -v addons-214491:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0131 14:10:36.520531  125487 cli_runner.go:217] Completed: docker run --rm --name addons-214491-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-214491 --entrypoint /usr/bin/test -v addons-214491:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (2.392920929s)
	I0131 14:10:36.520566  125487 oci.go:107] Successfully prepared a docker volume addons-214491
	I0131 14:10:36.520586  125487 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0131 14:10:36.520609  125487 kic.go:194] Starting extracting preloaded images to volume ...
	I0131 14:10:36.520669  125487 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-214491:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0131 14:10:42.023161  125487 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-214491:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.502437986s)
	I0131 14:10:42.023200  125487 kic.go:203] duration metric: took 5.502588 seconds to extract preloaded images to volume
	W0131 14:10:42.023380  125487 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0131 14:10:42.023493  125487 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0131 14:10:42.078520  125487 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-214491 --name addons-214491 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-214491 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-214491 --network addons-214491 --ip 192.168.49.2 --volume addons-214491:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0131 14:10:42.403293  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Running}}
	I0131 14:10:42.422808  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:10:42.443613  125487 cli_runner.go:164] Run: docker exec addons-214491 stat /var/lib/dpkg/alternatives/iptables
	I0131 14:10:42.489084  125487 oci.go:144] the created container "addons-214491" has a running status.
	I0131 14:10:42.489116  125487 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa...
	I0131 14:10:42.673543  125487 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0131 14:10:42.693980  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:10:42.714166  125487 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0131 14:10:42.714201  125487 kic_runner.go:114] Args: [docker exec --privileged addons-214491 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0131 14:10:42.769127  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:10:42.787116  125487 machine.go:88] provisioning docker machine ...
	I0131 14:10:42.787167  125487 ubuntu.go:169] provisioning hostname "addons-214491"
	I0131 14:10:42.787238  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:10:42.808226  125487 main.go:141] libmachine: Using SSH client type: native
	I0131 14:10:42.808890  125487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0131 14:10:42.808924  125487 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214491 && echo "addons-214491" | sudo tee /etc/hostname
	I0131 14:10:42.809851  125487 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55380->127.0.0.1:32772: read: connection reset by peer
	I0131 14:10:45.959186  125487 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214491
	
	I0131 14:10:45.959304  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:10:45.977243  125487 main.go:141] libmachine: Using SSH client type: native
	I0131 14:10:45.977653  125487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80aa00] 0x80d6e0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0131 14:10:45.977673  125487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214491' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214491/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214491' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 14:10:46.110653  125487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 14:10:46.110697  125487 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18007-117277/.minikube CaCertPath:/home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18007-117277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18007-117277/.minikube}
	I0131 14:10:46.110732  125487 ubuntu.go:177] setting up certificates
	I0131 14:10:46.110751  125487 provision.go:83] configureAuth start
	I0131 14:10:46.110847  125487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-214491
	I0131 14:10:46.130659  125487 provision.go:138] copyHostCerts
	I0131 14:10:46.130759  125487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18007-117277/.minikube/ca.pem (1078 bytes)
	I0131 14:10:46.130910  125487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18007-117277/.minikube/cert.pem (1123 bytes)
	I0131 14:10:46.130984  125487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18007-117277/.minikube/key.pem (1679 bytes)
	I0131 14:10:46.131056  125487 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18007-117277/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca-key.pem org=jenkins.addons-214491 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-214491]
	I0131 14:10:46.252263  125487 provision.go:172] copyRemoteCerts
	I0131 14:10:46.252335  125487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 14:10:46.252377  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:10:46.270276  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:10:46.366275  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 14:10:46.389093  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0131 14:10:46.411185  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 14:10:46.435375  125487 provision.go:86] duration metric: configureAuth took 324.60332ms
	I0131 14:10:46.435404  125487 ubuntu.go:193] setting minikube options for container-runtime
	I0131 14:10:46.435580  125487 config.go:182] Loaded profile config "addons-214491": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:10:46.435592  125487 machine.go:91] provisioned docker machine in 3.648452061s
	I0131 14:10:46.435599  125487 client.go:171] LocalClient.Create took 13.094444828s
	I0131 14:10:46.435617  125487 start.go:167] duration metric: libmachine.API.Create for "addons-214491" took 13.09451664s
	I0131 14:10:46.435626  125487 start.go:300] post-start starting for "addons-214491" (driver="docker")
	I0131 14:10:46.435639  125487 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 14:10:46.435681  125487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 14:10:46.435722  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:10:46.454278  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:10:46.551117  125487 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 14:10:46.554846  125487 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0131 14:10:46.554879  125487 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0131 14:10:46.554887  125487 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0131 14:10:46.554895  125487 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0131 14:10:46.554908  125487 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-117277/.minikube/addons for local assets ...
	I0131 14:10:46.554995  125487 filesync.go:126] Scanning /home/jenkins/minikube-integration/18007-117277/.minikube/files for local assets ...
	I0131 14:10:46.555022  125487 start.go:303] post-start completed in 119.389166ms
	I0131 14:10:46.555317  125487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-214491
	I0131 14:10:46.572599  125487 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/config.json ...
	I0131 14:10:46.572874  125487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0131 14:10:46.572916  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:10:46.589150  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:10:46.683020  125487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0131 14:10:46.688115  125487 start.go:128] duration metric: createHost completed in 13.430804476s
	I0131 14:10:46.688156  125487 start.go:83] releasing machines lock for "addons-214491", held for 13.431016468s
	I0131 14:10:46.688245  125487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-214491
	I0131 14:10:46.707620  125487 ssh_runner.go:195] Run: cat /version.json
	I0131 14:10:46.707701  125487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 14:10:46.707719  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:10:46.707810  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:10:46.726478  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:10:46.726797  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:10:46.911118  125487 ssh_runner.go:195] Run: systemctl --version
	I0131 14:10:46.915630  125487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0131 14:10:46.920121  125487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0131 14:10:46.943786  125487 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0131 14:10:46.943860  125487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 14:10:46.968723  125487 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0131 14:10:46.968756  125487 start.go:475] detecting cgroup driver to use...
	I0131 14:10:46.968792  125487 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0131 14:10:46.968837  125487 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0131 14:10:46.979968  125487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0131 14:10:46.989769  125487 docker.go:217] disabling cri-docker service (if available) ...
	I0131 14:10:46.989815  125487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 14:10:47.002041  125487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 14:10:47.014440  125487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 14:10:47.086449  125487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 14:10:47.162776  125487 docker.go:233] disabling docker service ...
	I0131 14:10:47.162841  125487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 14:10:47.181593  125487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 14:10:47.191843  125487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 14:10:47.270227  125487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 14:10:47.345899  125487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 14:10:47.356329  125487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 14:10:47.370883  125487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0131 14:10:47.379738  125487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0131 14:10:47.388969  125487 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0131 14:10:47.389037  125487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0131 14:10:47.397985  125487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0131 14:10:47.406571  125487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0131 14:10:47.415295  125487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0131 14:10:47.423763  125487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 14:10:47.431671  125487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0131 14:10:47.440198  125487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 14:10:47.447942  125487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 14:10:47.455431  125487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 14:10:47.529201  125487 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0131 14:10:47.628443  125487 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0131 14:10:47.628536  125487 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0131 14:10:47.632099  125487 start.go:543] Will wait 60s for crictl version
	I0131 14:10:47.632157  125487 ssh_runner.go:195] Run: which crictl
	I0131 14:10:47.635295  125487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 14:10:47.672212  125487 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0131 14:10:47.672303  125487 ssh_runner.go:195] Run: containerd --version
	I0131 14:10:47.699694  125487 ssh_runner.go:195] Run: containerd --version
	I0131 14:10:47.728950  125487 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0131 14:10:47.730225  125487 cli_runner.go:164] Run: docker network inspect addons-214491 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0131 14:10:47.748523  125487 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0131 14:10:47.752873  125487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 14:10:47.765523  125487 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0131 14:10:47.765637  125487 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 14:10:47.801882  125487 containerd.go:612] all images are preloaded for containerd runtime.
	I0131 14:10:47.801904  125487 containerd.go:519] Images already preloaded, skipping extraction
	I0131 14:10:47.801952  125487 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 14:10:47.835469  125487 containerd.go:612] all images are preloaded for containerd runtime.
	I0131 14:10:47.835494  125487 cache_images.go:84] Images are preloaded, skipping loading
	I0131 14:10:47.835546  125487 ssh_runner.go:195] Run: sudo crictl info
	I0131 14:10:47.870282  125487 cni.go:84] Creating CNI manager for ""
	I0131 14:10:47.870307  125487 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0131 14:10:47.870327  125487 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 14:10:47.870345  125487 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214491 NodeName:addons-214491 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 14:10:47.870468  125487 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-214491"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 14:10:47.870530  125487 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-214491 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-214491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 14:10:47.870584  125487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 14:10:47.879401  125487 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 14:10:47.879475  125487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 14:10:47.888156  125487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0131 14:10:47.907027  125487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 14:10:47.925683  125487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0131 14:10:47.945625  125487 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0131 14:10:47.949820  125487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 14:10:47.962235  125487 certs.go:56] Setting up /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491 for IP: 192.168.49.2
	I0131 14:10:47.962281  125487 certs.go:190] acquiring lock for shared ca certs: {Name:mkeb54e6602fe1d63447effea679de9e9af3fb32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:47.962436  125487 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18007-117277/.minikube/ca.key
	I0131 14:10:48.028098  125487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-117277/.minikube/ca.crt ...
	I0131 14:10:48.028127  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/ca.crt: {Name:mk6c1f82a770c32d1a5ebed3b7d62473053df569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.028284  125487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-117277/.minikube/ca.key ...
	I0131 14:10:48.028307  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/ca.key: {Name:mk7cdc03b4b70e2d67861f62263e1d14dfacc982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.028389  125487 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18007-117277/.minikube/proxy-client-ca.key
	I0131 14:10:48.096211  125487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-117277/.minikube/proxy-client-ca.crt ...
	I0131 14:10:48.096246  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/proxy-client-ca.crt: {Name:mkc6e03e598486fc42da1a3749132bb565940a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.096430  125487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-117277/.minikube/proxy-client-ca.key ...
	I0131 14:10:48.096442  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/proxy-client-ca.key: {Name:mk390459cac59acd892564bf4f2438e7089b8ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.096551  125487 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.key
	I0131 14:10:48.096564  125487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt with IP's: []
	I0131 14:10:48.176090  125487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt ...
	I0131 14:10:48.176126  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: {Name:mkaf1bc6cad3f24c99fb866d0261c634911a88ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.176291  125487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.key ...
	I0131 14:10:48.176302  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.key: {Name:mkdc3b2e0e2e45e78eb4f47bba9364a3e0e1adfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.176383  125487 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.key.dd3b5fb2
	I0131 14:10:48.176401  125487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0131 14:10:48.297087  125487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.crt.dd3b5fb2 ...
	I0131 14:10:48.297133  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.crt.dd3b5fb2: {Name:mk880512e4de89ef95e01190d3858772c87d57f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.297356  125487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.key.dd3b5fb2 ...
	I0131 14:10:48.297378  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.key.dd3b5fb2: {Name:mka2d2f7204800413533ebfaf9388e213453580c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.297543  125487 certs.go:337] copying /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.crt
	I0131 14:10:48.297654  125487 certs.go:341] copying /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.key
	I0131 14:10:48.297722  125487 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.key
	I0131 14:10:48.297747  125487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.crt with IP's: []
	I0131 14:10:48.431932  125487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.crt ...
	I0131 14:10:48.431968  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.crt: {Name:mkf20743cd84e3ec108055df042f5d875327c93a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.432167  125487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.key ...
	I0131 14:10:48.432189  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.key: {Name:mk73fdc86d07f0e5cd2e26bc026e4cabff8c4358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:48.432450  125487 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 14:10:48.432507  125487 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/home/jenkins/minikube-integration/18007-117277/.minikube/certs/ca.pem (1078 bytes)
	I0131 14:10:48.432549  125487 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/home/jenkins/minikube-integration/18007-117277/.minikube/certs/cert.pem (1123 bytes)
	I0131 14:10:48.432584  125487 certs.go:437] found cert: /home/jenkins/minikube-integration/18007-117277/.minikube/certs/home/jenkins/minikube-integration/18007-117277/.minikube/certs/key.pem (1679 bytes)
	I0131 14:10:48.433194  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 14:10:48.456104  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 14:10:48.477232  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 14:10:48.498530  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 14:10:48.519488  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 14:10:48.540612  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 14:10:48.561983  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 14:10:48.582983  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 14:10:48.603865  125487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18007-117277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 14:10:48.625420  125487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 14:10:48.641693  125487 ssh_runner.go:195] Run: openssl version
	I0131 14:10:48.646817  125487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 14:10:48.655219  125487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 14:10:48.658268  125487 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 14:10 /usr/share/ca-certificates/minikubeCA.pem
	I0131 14:10:48.658323  125487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 14:10:48.664451  125487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 14:10:48.672791  125487 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 14:10:48.675789  125487 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0131 14:10:48.675841  125487 kubeadm.go:404] StartCluster: {Name:addons-214491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-214491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 14:10:48.675923  125487 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0131 14:10:48.675962  125487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 14:10:48.708938  125487 cri.go:89] found id: ""
	I0131 14:10:48.709004  125487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 14:10:48.717428  125487 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 14:10:48.725359  125487 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0131 14:10:48.725418  125487 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 14:10:48.733063  125487 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 14:10:48.733134  125487 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0131 14:10:48.777228  125487 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 14:10:48.777322  125487 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 14:10:48.811965  125487 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0131 14:10:48.812030  125487 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-gcp
	I0131 14:10:48.812061  125487 kubeadm.go:322] OS: Linux
	I0131 14:10:48.812105  125487 kubeadm.go:322] CGROUPS_CPU: enabled
	I0131 14:10:48.812159  125487 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0131 14:10:48.812212  125487 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0131 14:10:48.812251  125487 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0131 14:10:48.812296  125487 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0131 14:10:48.812343  125487 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0131 14:10:48.812381  125487 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0131 14:10:48.812421  125487 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0131 14:10:48.812477  125487 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0131 14:10:48.874151  125487 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 14:10:48.874277  125487 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 14:10:48.874372  125487 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 14:10:49.065737  125487 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 14:10:49.068481  125487 out.go:204]   - Generating certificates and keys ...
	I0131 14:10:49.068587  125487 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 14:10:49.068678  125487 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 14:10:49.143900  125487 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0131 14:10:49.229617  125487 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0131 14:10:49.503876  125487 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0131 14:10:49.593614  125487 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0131 14:10:49.841803  125487 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0131 14:10:49.841998  125487 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-214491 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0131 14:10:49.889192  125487 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0131 14:10:49.889369  125487 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-214491 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0131 14:10:50.099045  125487 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0131 14:10:50.202427  125487 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0131 14:10:50.303317  125487 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0131 14:10:50.303391  125487 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 14:10:50.480627  125487 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 14:10:50.573920  125487 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 14:10:50.781732  125487 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 14:10:50.916184  125487 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 14:10:50.917629  125487 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 14:10:50.919883  125487 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 14:10:50.922032  125487 out.go:204]   - Booting up control plane ...
	I0131 14:10:50.922173  125487 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 14:10:50.922295  125487 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 14:10:50.922385  125487 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 14:10:50.934072  125487 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 14:10:50.934919  125487 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 14:10:50.934991  125487 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 14:10:51.015347  125487 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 14:10:56.017840  125487 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002597 seconds
	I0131 14:10:56.017949  125487 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 14:10:56.031951  125487 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 14:10:56.552919  125487 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 14:10:56.553119  125487 kubeadm.go:322] [mark-control-plane] Marking the node addons-214491 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 14:10:57.063391  125487 kubeadm.go:322] [bootstrap-token] Using token: b06pqs.qtbob5qs89luanua
	I0131 14:10:57.064635  125487 out.go:204]   - Configuring RBAC rules ...
	I0131 14:10:57.064791  125487 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 14:10:57.070016  125487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 14:10:57.078073  125487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 14:10:57.081596  125487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 14:10:57.086376  125487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 14:10:57.089775  125487 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 14:10:57.101384  125487 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 14:10:57.402550  125487 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 14:10:57.509253  125487 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 14:10:57.510459  125487 kubeadm.go:322] 
	I0131 14:10:57.510551  125487 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 14:10:57.510561  125487 kubeadm.go:322] 
	I0131 14:10:57.510686  125487 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 14:10:57.510709  125487 kubeadm.go:322] 
	I0131 14:10:57.510737  125487 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 14:10:57.510810  125487 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 14:10:57.510871  125487 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 14:10:57.510880  125487 kubeadm.go:322] 
	I0131 14:10:57.510951  125487 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 14:10:57.510958  125487 kubeadm.go:322] 
	I0131 14:10:57.511023  125487 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 14:10:57.511030  125487 kubeadm.go:322] 
	I0131 14:10:57.511093  125487 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 14:10:57.511188  125487 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 14:10:57.511280  125487 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 14:10:57.511288  125487 kubeadm.go:322] 
	I0131 14:10:57.511399  125487 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 14:10:57.511496  125487 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 14:10:57.511503  125487 kubeadm.go:322] 
	I0131 14:10:57.511598  125487 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token b06pqs.qtbob5qs89luanua \
	I0131 14:10:57.511741  125487 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:919e83d6c7eedee9017f84f7359a65c9a85e797008d7d7c0cf490ef8044657cb \
	I0131 14:10:57.511766  125487 kubeadm.go:322] 	--control-plane 
	I0131 14:10:57.511773  125487 kubeadm.go:322] 
	I0131 14:10:57.511881  125487 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 14:10:57.511888  125487 kubeadm.go:322] 
	I0131 14:10:57.512020  125487 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token b06pqs.qtbob5qs89luanua \
	I0131 14:10:57.512192  125487 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:919e83d6c7eedee9017f84f7359a65c9a85e797008d7d7c0cf490ef8044657cb 
	I0131 14:10:57.515022  125487 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-gcp\n", err: exit status 1
	I0131 14:10:57.515176  125487 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 14:10:57.515212  125487 cni.go:84] Creating CNI manager for ""
	I0131 14:10:57.515231  125487 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0131 14:10:57.516994  125487 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0131 14:10:57.518350  125487 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0131 14:10:57.524188  125487 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0131 14:10:57.524226  125487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0131 14:10:57.612055  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0131 14:10:58.353756  125487 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 14:10:58.353851  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:10:58.353879  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218 minikube.k8s.io/name=addons-214491 minikube.k8s.io/updated_at=2024_01_31T14_10_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:10:58.361795  125487 ops.go:34] apiserver oom_adj: -16
	I0131 14:10:58.438468  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:10:58.939033  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:10:59.439434  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:10:59.939454  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:00.438654  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:00.938533  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:01.438556  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:01.938944  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:02.438797  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:02.939398  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:03.439413  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:03.939560  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:04.438813  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:04.939337  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:05.439313  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:05.939387  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:06.439137  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:06.939316  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:07.439395  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:07.938747  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:08.438929  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:08.939341  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:09.439141  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:09.938594  125487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 14:11:10.013637  125487 kubeadm.go:1088] duration metric: took 11.65984962s to wait for elevateKubeSystemPrivileges.
	I0131 14:11:10.013681  125487 kubeadm.go:406] StartCluster complete in 21.337849721s
	I0131 14:11:10.013707  125487 settings.go:142] acquiring lock: {Name:mk5c6ffd872c98cabf7f960ae9ce352ca09ff7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:11:10.013847  125487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:11:10.014387  125487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/kubeconfig: {Name:mkc4792adf9cf6b0aa335f112c622bcab16b5821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:11:10.014657  125487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 14:11:10.014910  125487 config.go:182] Loaded profile config "addons-214491": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:11:10.014858  125487 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0131 14:11:10.015007  125487 addons.go:69] Setting yakd=true in profile "addons-214491"
	I0131 14:11:10.015031  125487 addons.go:234] Setting addon yakd=true in "addons-214491"
	I0131 14:11:10.015088  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.015148  125487 addons.go:69] Setting ingress-dns=true in profile "addons-214491"
	I0131 14:11:10.015178  125487 addons.go:234] Setting addon ingress-dns=true in "addons-214491"
	I0131 14:11:10.015239  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.015617  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.015662  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.015841  125487 addons.go:69] Setting default-storageclass=true in profile "addons-214491"
	I0131 14:11:10.015878  125487 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214491"
	I0131 14:11:10.015878  125487 addons.go:69] Setting cloud-spanner=true in profile "addons-214491"
	I0131 14:11:10.015908  125487 addons.go:234] Setting addon cloud-spanner=true in "addons-214491"
	I0131 14:11:10.015968  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.016117  125487 addons.go:69] Setting inspektor-gadget=true in profile "addons-214491"
	I0131 14:11:10.016142  125487 addons.go:234] Setting addon inspektor-gadget=true in "addons-214491"
	I0131 14:11:10.016175  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.016187  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.016449  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.016644  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.016640  125487 addons.go:69] Setting storage-provisioner=true in profile "addons-214491"
	I0131 14:11:10.016669  125487 addons.go:234] Setting addon storage-provisioner=true in "addons-214491"
	I0131 14:11:10.016652  125487 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214491"
	I0131 14:11:10.016678  125487 addons.go:69] Setting volumesnapshots=true in profile "addons-214491"
	I0131 14:11:10.016718  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.016724  125487 addons.go:234] Setting addon volumesnapshots=true in "addons-214491"
	I0131 14:11:10.016748  125487 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-214491"
	I0131 14:11:10.016803  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.016805  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.016884  125487 addons.go:69] Setting gcp-auth=true in profile "addons-214491"
	I0131 14:11:10.016909  125487 mustload.go:65] Loading cluster: addons-214491
	I0131 14:11:10.017120  125487 config.go:182] Loaded profile config "addons-214491": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:11:10.017160  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.017262  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.017367  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.017432  125487 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214491"
	I0131 14:11:10.017453  125487 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-214491"
	I0131 14:11:10.017506  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.017946  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.019161  125487 addons.go:69] Setting registry=true in profile "addons-214491"
	I0131 14:11:10.019239  125487 addons.go:234] Setting addon registry=true in "addons-214491"
	I0131 14:11:10.019303  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.019803  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.021521  125487 addons.go:69] Setting metrics-server=true in profile "addons-214491"
	I0131 14:11:10.021615  125487 addons.go:234] Setting addon metrics-server=true in "addons-214491"
	I0131 14:11:10.021713  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.022172  125487 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214491"
	I0131 14:11:10.022220  125487 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214491"
	I0131 14:11:10.022341  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.022589  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.022673  125487 addons.go:69] Setting ingress=true in profile "addons-214491"
	I0131 14:11:10.022695  125487 addons.go:234] Setting addon ingress=true in "addons-214491"
	I0131 14:11:10.022760  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.023021  125487 addons.go:69] Setting helm-tiller=true in profile "addons-214491"
	I0131 14:11:10.023075  125487 addons.go:234] Setting addon helm-tiller=true in "addons-214491"
	I0131 14:11:10.023147  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.017262  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.055440  125487 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0131 14:11:10.056865  125487 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0131 14:11:10.056905  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0131 14:11:10.056986  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.055152  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.059819  125487 out.go:177]   - Using image docker.io/registry:2.8.3
	I0131 14:11:10.060071  125487 addons.go:234] Setting addon default-storageclass=true in "addons-214491"
	I0131 14:11:10.059211  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.058838  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.068383  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.069688  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.073570  125487 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0131 14:11:10.075390  125487 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0131 14:11:10.075399  125487 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0131 14:11:10.073977  125487 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0131 14:11:10.068403  125487 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0131 14:11:10.077166  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0131 14:11:10.077291  125487 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0131 14:11:10.078572  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0131 14:11:10.078665  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.081091  125487 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0131 14:11:10.081120  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0131 14:11:10.081190  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.083220  125487 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0131 14:11:10.083254  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0131 14:11:10.078990  125487 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0131 14:11:10.079059  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.083324  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.085683  125487 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0131 14:11:10.086169  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0131 14:11:10.085826  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0131 14:11:10.087641  125487 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0131 14:11:10.087665  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0131 14:11:10.087740  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.086509  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.100550  125487 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 14:11:10.103223  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.105426  125487 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 14:11:10.105458  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 14:11:10.105553  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.123467  125487 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 14:11:10.123503  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 14:11:10.123571  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.132227  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0131 14:11:10.132737  125487 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-214491"
	I0131 14:11:10.134577  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:10.135127  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:10.135311  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0131 14:11:10.144294  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0131 14:11:10.146229  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0131 14:11:10.147468  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0131 14:11:10.148686  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0131 14:11:10.150203  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0131 14:11:10.156042  125487 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0131 14:11:10.157946  125487 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0131 14:11:10.159270  125487 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0131 14:11:10.159286  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0131 14:11:10.158029  125487 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 14:11:10.159310  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 14:11:10.159348  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.159359  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.158367  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.159123  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.162885  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.165154  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.168099  125487 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0131 14:11:10.169562  125487 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0131 14:11:10.169587  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0131 14:11:10.169645  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.171646  125487 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0131 14:11:10.172988  125487 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0131 14:11:10.174395  125487 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0131 14:11:10.175821  125487 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0131 14:11:10.175848  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0131 14:11:10.175922  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.177592  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.181059  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.184743  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.192475  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.196313  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.202567  125487 out.go:177]   - Using image docker.io/busybox:stable
	I0131 14:11:10.206128  125487 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0131 14:11:10.207949  125487 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0131 14:11:10.207971  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0131 14:11:10.208041  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:10.207521  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.207526  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.207521  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	W0131 14:11:10.213697  125487 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0131 14:11:10.213738  125487 retry.go:31] will retry after 176.610455ms: ssh: handshake failed: EOF
	W0131 14:11:10.213775  125487 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0131 14:11:10.213799  125487 retry.go:31] will retry after 243.480794ms: ssh: handshake failed: EOF
	I0131 14:11:10.243370  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:10.412745  125487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 14:11:10.518350  125487 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0131 14:11:10.518381  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0131 14:11:10.521594  125487 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-214491" context rescaled to 1 replicas
	I0131 14:11:10.521657  125487 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0131 14:11:10.523165  125487 out.go:177] * Verifying Kubernetes components...
	I0131 14:11:10.524922  125487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 14:11:10.609386  125487 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0131 14:11:10.609426  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0131 14:11:10.610012  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0131 14:11:10.624875  125487 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0131 14:11:10.624918  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0131 14:11:10.813834  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0131 14:11:10.814993  125487 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0131 14:11:10.815024  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0131 14:11:10.817346  125487 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0131 14:11:10.817375  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0131 14:11:10.819036  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0131 14:11:10.822634  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0131 14:11:10.906971  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0131 14:11:10.907320  125487 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0131 14:11:10.907348  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0131 14:11:10.910049  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 14:11:11.009771  125487 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0131 14:11:11.009808  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0131 14:11:11.011493  125487 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0131 14:11:11.011526  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0131 14:11:11.024700  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 14:11:11.105441  125487 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0131 14:11:11.105498  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0131 14:11:11.113400  125487 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0131 14:11:11.113435  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0131 14:11:11.209370  125487 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 14:11:11.209426  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0131 14:11:11.306833  125487 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0131 14:11:11.306871  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0131 14:11:11.308707  125487 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0131 14:11:11.308738  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0131 14:11:11.405834  125487 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0131 14:11:11.405875  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0131 14:11:11.418136  125487 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0131 14:11:11.418178  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0131 14:11:11.608424  125487 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 14:11:11.608465  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 14:11:11.612813  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0131 14:11:11.705516  125487 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0131 14:11:11.705552  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0131 14:11:11.721201  125487 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0131 14:11:11.721237  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0131 14:11:11.822114  125487 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0131 14:11:11.822149  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0131 14:11:11.924152  125487 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 14:11:11.924190  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 14:11:12.010412  125487 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0131 14:11:12.010449  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0131 14:11:12.206875  125487 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0131 14:11:12.206978  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0131 14:11:12.213716  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0131 14:11:12.404355  125487 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0131 14:11:12.404395  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0131 14:11:12.407811  125487 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0131 14:11:12.407847  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0131 14:11:12.408224  125487 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0131 14:11:12.408252  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0131 14:11:12.412459  125487 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0131 14:11:12.412490  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0131 14:11:12.423802  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 14:11:12.817358  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0131 14:11:12.908687  125487 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0131 14:11:12.908723  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0131 14:11:12.923984  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0131 14:11:13.217168  125487 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.804317398s)
	I0131 14:11:13.217249  125487 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0131 14:11:13.217285  125487 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.692337667s)
	I0131 14:11:13.218417  125487 node_ready.go:35] waiting up to 6m0s for node "addons-214491" to be "Ready" ...
	I0131 14:11:13.218671  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.608630575s)
	I0131 14:11:13.222182  125487 node_ready.go:49] node "addons-214491" has status "Ready":"True"
	I0131 14:11:13.222223  125487 node_ready.go:38] duration metric: took 3.773062ms waiting for node "addons-214491" to be "Ready" ...
	I0131 14:11:13.222235  125487 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 14:11:13.230834  125487 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dhqkg" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:13.303808  125487 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0131 14:11:13.303854  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0131 14:11:13.324676  125487 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0131 14:11:13.324790  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0131 14:11:13.715280  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0131 14:11:13.905866  125487 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0131 14:11:13.905909  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0131 14:11:14.214152  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.400266253s)
	I0131 14:11:14.706825  125487 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0131 14:11:14.706867  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0131 14:11:15.221200  125487 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0131 14:11:15.221311  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0131 14:11:15.404658  125487 pod_ready.go:102] pod "coredns-5dd5756b68-dhqkg" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:15.521177  125487 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0131 14:11:15.521272  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0131 14:11:15.804479  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.985395233s)
	I0131 14:11:16.113763  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0131 14:11:16.909895  125487 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0131 14:11:16.909982  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:16.942318  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:17.719427  125487 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0131 14:11:17.823180  125487 pod_ready.go:102] pod "coredns-5dd5756b68-dhqkg" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:17.919588  125487 addons.go:234] Setting addon gcp-auth=true in "addons-214491"
	I0131 14:11:17.919725  125487 host.go:66] Checking if "addons-214491" exists ...
	I0131 14:11:17.920472  125487 cli_runner.go:164] Run: docker container inspect addons-214491 --format={{.State.Status}}
	I0131 14:11:17.940154  125487 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0131 14:11:17.940204  125487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-214491
	I0131 14:11:17.957372  125487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/addons-214491/id_rsa Username:docker}
	I0131 14:11:19.926804  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.104114158s)
	I0131 14:11:19.926860  125487 addons.go:470] Verifying addon ingress=true in "addons-214491"
	I0131 14:11:19.928798  125487 out.go:177] * Verifying ingress addon...
	I0131 14:11:19.926951  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.016870973s)
	I0131 14:11:19.927022  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.902295612s)
	I0131 14:11:19.927046  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.019866766s)
	I0131 14:11:19.927126  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.314200988s)
	I0131 14:11:19.927161  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.713402308s)
	I0131 14:11:19.927259  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.503420588s)
	I0131 14:11:19.927305  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.109907374s)
	I0131 14:11:19.927415  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.003383335s)
	I0131 14:11:19.927559  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.212178691s)
	I0131 14:11:19.928912  125487 addons.go:470] Verifying addon metrics-server=true in "addons-214491"
	I0131 14:11:19.928950  125487 addons.go:470] Verifying addon registry=true in "addons-214491"
	W0131 14:11:19.929002  125487 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0131 14:11:19.931803  125487 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0131 14:11:19.932261  125487 out.go:177] * Verifying registry addon...
	I0131 14:11:19.932287  125487 retry.go:31] will retry after 239.16958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0131 14:11:19.933760  125487 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214491 service yakd-dashboard -n yakd-dashboard
	
	I0131 14:11:19.937414  125487 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0131 14:11:20.004237  125487 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0131 14:11:20.004851  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:20.010288  125487 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0131 14:11:20.010320  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:20.174101  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0131 14:11:20.305414  125487 pod_ready.go:102] pod "coredns-5dd5756b68-dhqkg" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:20.516704  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:20.521154  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:20.938321  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:21.010054  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:21.439659  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:21.514253  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:21.827161  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.713291803s)
	I0131 14:11:21.827264  125487 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.887078266s)
	I0131 14:11:21.827281  125487 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-214491"
	I0131 14:11:21.829155  125487 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0131 14:11:21.831912  125487 out.go:177] * Verifying csi-hostpath-driver addon...
	I0131 14:11:21.833365  125487 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0131 14:11:21.834870  125487 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0131 14:11:21.834953  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0131 14:11:21.834086  125487 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0131 14:11:21.908948  125487 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0131 14:11:21.909032  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:21.928728  125487 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0131 14:11:21.928759  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0131 14:11:22.003677  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:22.011624  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:22.020497  125487 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0131 14:11:22.020630  125487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0131 14:11:22.110416  125487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0131 14:11:22.310021  125487 pod_ready.go:102] pod "coredns-5dd5756b68-dhqkg" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:22.407269  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:22.505807  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:22.511325  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:22.818136  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.643950223s)
	I0131 14:11:22.841579  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:22.939577  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:23.011473  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:23.322418  125487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.211952152s)
	I0131 14:11:23.323354  125487 addons.go:470] Verifying addon gcp-auth=true in "addons-214491"
	I0131 14:11:23.324890  125487 out.go:177] * Verifying gcp-auth addon...
	I0131 14:11:23.327175  125487 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0131 14:11:23.330516  125487 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0131 14:11:23.330545  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:23.405068  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:23.437845  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:23.510412  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:23.831448  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:23.840727  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:23.938065  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:24.010748  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:24.331757  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:24.341273  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:24.438671  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:24.510107  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:24.738479  125487 pod_ready.go:92] pod "coredns-5dd5756b68-dhqkg" in "kube-system" namespace has status "Ready":"True"
	I0131 14:11:24.738505  125487 pod_ready.go:81] duration metric: took 11.50764341s waiting for pod "coredns-5dd5756b68-dhqkg" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.738515  125487 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.744197  125487 pod_ready.go:92] pod "etcd-addons-214491" in "kube-system" namespace has status "Ready":"True"
	I0131 14:11:24.744229  125487 pod_ready.go:81] duration metric: took 5.706672ms waiting for pod "etcd-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.744248  125487 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.752331  125487 pod_ready.go:92] pod "kube-apiserver-addons-214491" in "kube-system" namespace has status "Ready":"True"
	I0131 14:11:24.752356  125487 pod_ready.go:81] duration metric: took 8.099299ms waiting for pod "kube-apiserver-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.752368  125487 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.757550  125487 pod_ready.go:92] pod "kube-controller-manager-addons-214491" in "kube-system" namespace has status "Ready":"True"
	I0131 14:11:24.757578  125487 pod_ready.go:81] duration metric: took 5.202559ms waiting for pod "kube-controller-manager-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.757592  125487 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6sbxl" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.762934  125487 pod_ready.go:92] pod "kube-proxy-6sbxl" in "kube-system" namespace has status "Ready":"True"
	I0131 14:11:24.762958  125487 pod_ready.go:81] duration metric: took 5.357685ms waiting for pod "kube-proxy-6sbxl" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.762971  125487 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:24.831677  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:24.841739  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:24.938561  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:25.010463  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:25.135292  125487 pod_ready.go:92] pod "kube-scheduler-addons-214491" in "kube-system" namespace has status "Ready":"True"
	I0131 14:11:25.135324  125487 pod_ready.go:81] duration metric: took 372.344643ms waiting for pod "kube-scheduler-addons-214491" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:25.135339  125487 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:25.330757  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:25.340024  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:25.438991  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:25.510140  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:25.832044  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:25.841063  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:25.938221  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:26.010464  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:26.331927  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:26.340811  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:26.438997  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:26.509881  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:26.831692  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:26.840782  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:26.938652  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:27.009797  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:27.140655  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:27.330655  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:27.339814  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:27.438030  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:27.510506  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:27.831249  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:27.841864  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:27.939332  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:28.010077  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:28.331957  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:28.341273  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:28.438498  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:28.510374  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:28.831118  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:28.840155  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:28.938072  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:29.009530  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:29.141402  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:29.330957  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:29.341291  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:29.438579  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:29.511036  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:29.831841  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:29.841844  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:29.938776  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:30.010630  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:30.331697  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:30.342942  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:30.437420  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:30.509994  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:30.831333  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:30.842008  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:30.939880  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:31.010029  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:31.144159  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:31.331977  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:31.341058  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:31.438727  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:31.510245  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:31.832126  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:31.841393  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:31.940218  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:32.010203  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:32.336994  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:32.341363  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:32.438447  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:32.510779  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:32.831083  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:32.840911  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:32.937729  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:33.009250  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 14:11:33.331141  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:33.340665  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:33.441676  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:33.509708  125487 kapi.go:107] duration metric: took 13.505475467s to wait for kubernetes.io/minikube-addons=registry ...
	I0131 14:11:33.642842  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:33.831804  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:33.841714  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:33.938603  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:34.331125  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:34.345205  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:34.438100  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:34.832124  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:34.841294  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:34.938454  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:35.331141  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:35.342090  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:35.438541  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:35.831586  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:35.840976  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:35.938244  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:36.141622  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:36.331217  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:36.342275  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:36.438800  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:36.831622  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:36.840858  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:36.940018  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:37.331290  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:37.341292  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:37.438447  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:37.830801  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:37.840000  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:37.938815  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:38.142608  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:38.332114  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:38.341055  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:38.438357  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:38.832357  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:38.841881  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:38.940524  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:39.331332  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:39.341053  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:39.439305  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:39.831435  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:39.842091  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:39.938518  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:40.142701  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:40.331594  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:40.340714  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:40.439983  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:40.831785  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:40.842014  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:40.939288  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:41.331693  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:41.341185  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:41.439465  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:41.832325  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:41.841415  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:41.939384  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:42.332421  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:42.342724  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:42.438399  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:42.641764  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:42.831103  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:42.840276  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:42.938306  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:43.331284  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:43.340568  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:43.451455  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:43.832286  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:43.841845  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:43.939576  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:44.331466  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:44.340303  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:44.480026  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:44.641798  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:44.873133  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:44.875354  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:45.055446  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:45.335001  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:45.340682  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:45.438333  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:45.831442  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:45.840768  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:45.941026  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:46.330975  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:46.340700  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:46.438894  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:46.700293  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:46.831187  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:46.840073  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:46.937952  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:47.332605  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:47.341267  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:47.441980  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:47.831326  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:47.840675  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:47.938403  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:48.332226  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:48.341964  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:48.438346  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:48.831208  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:48.841575  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:48.939781  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:49.142798  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:49.331572  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:49.340679  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:49.440839  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:49.831075  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:49.840937  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:49.938405  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:50.332091  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:50.340549  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:50.439558  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:50.831914  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:50.866960  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:50.938385  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:51.331934  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:51.340903  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:51.438209  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:51.641154  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:51.831348  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:51.840904  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:51.939358  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:52.332119  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:52.341719  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:52.439853  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:52.832984  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:52.842095  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:52.939349  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:53.331393  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:53.341021  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:53.438693  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:53.641976  125487 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"False"
	I0131 14:11:53.832060  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:53.839739  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:53.938070  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:54.330667  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:54.341865  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:54.438798  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:54.641811  125487 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace has status "Ready":"True"
	I0131 14:11:54.641834  125487 pod_ready.go:81] duration metric: took 29.506487947s waiting for pod "nvidia-device-plugin-daemonset-lmr4m" in "kube-system" namespace to be "Ready" ...
	I0131 14:11:54.641843  125487 pod_ready.go:38] duration metric: took 41.419592825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 14:11:54.641861  125487 api_server.go:52] waiting for apiserver process to appear ...
	I0131 14:11:54.641906  125487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 14:11:54.658004  125487 api_server.go:72] duration metric: took 44.136299685s to wait for apiserver process to appear ...
	I0131 14:11:54.658036  125487 api_server.go:88] waiting for apiserver healthz status ...
	I0131 14:11:54.658059  125487 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0131 14:11:54.663491  125487 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0131 14:11:54.664767  125487 api_server.go:141] control plane version: v1.28.4
	I0131 14:11:54.664789  125487 api_server.go:131] duration metric: took 6.747154ms to wait for apiserver health ...
	I0131 14:11:54.664798  125487 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 14:11:54.675007  125487 system_pods.go:59] 19 kube-system pods found
	I0131 14:11:54.675038  125487 system_pods.go:61] "coredns-5dd5756b68-dhqkg" [4ee3da37-ee93-4adf-84b5-1a3536d0affd] Running
	I0131 14:11:54.675043  125487 system_pods.go:61] "csi-hostpath-attacher-0" [88bb39b3-bbd4-43e7-82ab-e15251e315b8] Running
	I0131 14:11:54.675048  125487 system_pods.go:61] "csi-hostpath-resizer-0" [60f6c692-f7d8-4bd5-a237-bae55df662c8] Running
	I0131 14:11:54.675056  125487 system_pods.go:61] "csi-hostpathplugin-kwkf2" [f15a28c1-2b0e-4ffb-97c2-1998273acef4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0131 14:11:54.675072  125487 system_pods.go:61] "etcd-addons-214491" [c18cf44a-a75f-4461-a983-3a75fd339fd7] Running
	I0131 14:11:54.675078  125487 system_pods.go:61] "kindnet-p9gwl" [efb988f3-3908-4431-a07e-99d45208a690] Running
	I0131 14:11:54.675082  125487 system_pods.go:61] "kube-apiserver-addons-214491" [8bd186fc-2746-445f-9c00-133f576943f6] Running
	I0131 14:11:54.675087  125487 system_pods.go:61] "kube-controller-manager-addons-214491" [76e0cfb3-278c-4699-9ff6-a78134d6209a] Running
	I0131 14:11:54.675091  125487 system_pods.go:61] "kube-ingress-dns-minikube" [9c551d17-79a7-4334-94f8-60c40742d004] Running
	I0131 14:11:54.675095  125487 system_pods.go:61] "kube-proxy-6sbxl" [d50a831b-a3d1-4061-8e3c-89413d6438dd] Running
	I0131 14:11:54.675099  125487 system_pods.go:61] "kube-scheduler-addons-214491" [7158bd24-bf9b-4f68-995e-8f49346ea88f] Running
	I0131 14:11:54.675105  125487 system_pods.go:61] "metrics-server-7c66d45ddc-5z5sm" [f00d08cb-8ef5-4fb1-9a0f-bc55ce02a581] Running
	I0131 14:11:54.675112  125487 system_pods.go:61] "nvidia-device-plugin-daemonset-lmr4m" [3c951f22-d962-4f13-929a-e7a2552f629c] Running
	I0131 14:11:54.675116  125487 system_pods.go:61] "registry-d9s2q" [2871b25a-9352-469f-8a23-944ab9a8e387] Running
	I0131 14:11:54.675123  125487 system_pods.go:61] "registry-proxy-q74f8" [df7b6b84-753f-4801-9602-60eb8519d1b6] Running
	I0131 14:11:54.675127  125487 system_pods.go:61] "snapshot-controller-58dbcc7b99-4rngn" [3cb8e201-d1cf-4d76-9062-94fad1c64649] Running
	I0131 14:11:54.675134  125487 system_pods.go:61] "snapshot-controller-58dbcc7b99-8xhng" [ac6a57e5-7da4-4d0e-86ce-7bf9f6c20fc2] Running
	I0131 14:11:54.675138  125487 system_pods.go:61] "storage-provisioner" [58e6a917-440c-442d-866f-9fb81149f70f] Running
	I0131 14:11:54.675144  125487 system_pods.go:61] "tiller-deploy-7b677967b9-fpdpc" [df50ab8f-7dde-4c15-ac50-a9d5d3ee508e] Running
	I0131 14:11:54.675152  125487 system_pods.go:74] duration metric: took 10.348002ms to wait for pod list to return data ...
	I0131 14:11:54.675162  125487 default_sa.go:34] waiting for default service account to be created ...
	I0131 14:11:54.677286  125487 default_sa.go:45] found service account: "default"
	I0131 14:11:54.677306  125487 default_sa.go:55] duration metric: took 2.138733ms for default service account to be created ...
	I0131 14:11:54.677313  125487 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 14:11:54.685019  125487 system_pods.go:86] 19 kube-system pods found
	I0131 14:11:54.685048  125487 system_pods.go:89] "coredns-5dd5756b68-dhqkg" [4ee3da37-ee93-4adf-84b5-1a3536d0affd] Running
	I0131 14:11:54.685054  125487 system_pods.go:89] "csi-hostpath-attacher-0" [88bb39b3-bbd4-43e7-82ab-e15251e315b8] Running
	I0131 14:11:54.685058  125487 system_pods.go:89] "csi-hostpath-resizer-0" [60f6c692-f7d8-4bd5-a237-bae55df662c8] Running
	I0131 14:11:54.685065  125487 system_pods.go:89] "csi-hostpathplugin-kwkf2" [f15a28c1-2b0e-4ffb-97c2-1998273acef4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0131 14:11:54.685070  125487 system_pods.go:89] "etcd-addons-214491" [c18cf44a-a75f-4461-a983-3a75fd339fd7] Running
	I0131 14:11:54.685077  125487 system_pods.go:89] "kindnet-p9gwl" [efb988f3-3908-4431-a07e-99d45208a690] Running
	I0131 14:11:54.685085  125487 system_pods.go:89] "kube-apiserver-addons-214491" [8bd186fc-2746-445f-9c00-133f576943f6] Running
	I0131 14:11:54.685089  125487 system_pods.go:89] "kube-controller-manager-addons-214491" [76e0cfb3-278c-4699-9ff6-a78134d6209a] Running
	I0131 14:11:54.685100  125487 system_pods.go:89] "kube-ingress-dns-minikube" [9c551d17-79a7-4334-94f8-60c40742d004] Running
	I0131 14:11:54.685105  125487 system_pods.go:89] "kube-proxy-6sbxl" [d50a831b-a3d1-4061-8e3c-89413d6438dd] Running
	I0131 14:11:54.685109  125487 system_pods.go:89] "kube-scheduler-addons-214491" [7158bd24-bf9b-4f68-995e-8f49346ea88f] Running
	I0131 14:11:54.685116  125487 system_pods.go:89] "metrics-server-7c66d45ddc-5z5sm" [f00d08cb-8ef5-4fb1-9a0f-bc55ce02a581] Running
	I0131 14:11:54.685121  125487 system_pods.go:89] "nvidia-device-plugin-daemonset-lmr4m" [3c951f22-d962-4f13-929a-e7a2552f629c] Running
	I0131 14:11:54.685126  125487 system_pods.go:89] "registry-d9s2q" [2871b25a-9352-469f-8a23-944ab9a8e387] Running
	I0131 14:11:54.685130  125487 system_pods.go:89] "registry-proxy-q74f8" [df7b6b84-753f-4801-9602-60eb8519d1b6] Running
	I0131 14:11:54.685135  125487 system_pods.go:89] "snapshot-controller-58dbcc7b99-4rngn" [3cb8e201-d1cf-4d76-9062-94fad1c64649] Running
	I0131 14:11:54.685139  125487 system_pods.go:89] "snapshot-controller-58dbcc7b99-8xhng" [ac6a57e5-7da4-4d0e-86ce-7bf9f6c20fc2] Running
	I0131 14:11:54.685143  125487 system_pods.go:89] "storage-provisioner" [58e6a917-440c-442d-866f-9fb81149f70f] Running
	I0131 14:11:54.685146  125487 system_pods.go:89] "tiller-deploy-7b677967b9-fpdpc" [df50ab8f-7dde-4c15-ac50-a9d5d3ee508e] Running
	I0131 14:11:54.685152  125487 system_pods.go:126] duration metric: took 7.833768ms to wait for k8s-apps to be running ...
	I0131 14:11:54.685159  125487 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 14:11:54.685214  125487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 14:11:54.696577  125487 system_svc.go:56] duration metric: took 11.410467ms WaitForService to wait for kubelet.
	I0131 14:11:54.696598  125487 kubeadm.go:581] duration metric: took 44.174901213s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 14:11:54.696618  125487 node_conditions.go:102] verifying NodePressure condition ...
	I0131 14:11:54.699284  125487 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0131 14:11:54.699310  125487 node_conditions.go:123] node cpu capacity is 8
	I0131 14:11:54.699323  125487 node_conditions.go:105] duration metric: took 2.701422ms to run NodePressure ...
	I0131 14:11:54.699332  125487 start.go:228] waiting for startup goroutines ...
	I0131 14:11:54.831496  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:54.840276  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:54.938365  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:55.331861  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:55.341301  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:55.439986  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:55.832641  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:55.840481  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:55.938810  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:56.331076  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:56.340879  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:56.438956  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:56.831061  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:56.842210  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:56.938343  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:57.331854  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:57.342766  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:57.439382  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:57.832216  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:57.842069  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:57.938269  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:58.331640  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:58.340721  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:58.481705  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:58.832298  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:58.841891  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:58.938671  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:59.332401  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:59.342436  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:59.439024  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:11:59.831264  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:11:59.841828  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:11:59.939857  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:12:00.332232  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:12:00.341898  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:00.439564  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:12:00.832026  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:12:00.842410  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:00.938373  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:12:01.331104  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:12:01.340556  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:01.438916  125487 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 14:12:01.831662  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:12:01.841262  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:01.938945  125487 kapi.go:107] duration metric: took 42.007141412s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0131 14:12:02.332002  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 14:12:02.341332  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:02.835639  125487 kapi.go:107] duration metric: took 39.508461558s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0131 14:12:02.837932  125487 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-214491 cluster.
	I0131 14:12:02.839475  125487 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0131 14:12:02.840944  125487 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0131 14:12:02.842883  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:03.340650  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:03.841045  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:04.341161  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:04.843041  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:05.340225  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:05.841901  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:06.340700  125487 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 14:12:06.840313  125487 kapi.go:107] duration metric: took 45.006225231s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0131 14:12:06.842220  125487 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, cloud-spanner, helm-tiller, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0131 14:12:06.843506  125487 addons.go:505] enable addons completed in 56.828654914s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner-rancher storage-provisioner inspektor-gadget cloud-spanner helm-tiller metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0131 14:12:06.843557  125487 start.go:233] waiting for cluster config update ...
	I0131 14:12:06.843579  125487 start.go:242] writing updated cluster config ...
	I0131 14:12:06.843860  125487 ssh_runner.go:195] Run: rm -f paused
	I0131 14:12:06.895175  125487 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 14:12:06.897172  125487 out.go:177] * Done! kubectl is now configured to use "addons-214491" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	20dec4e4614d7       2b70e4aaac6b5       5 seconds ago        Running             nginx                                    0                   b13f4340cd1ae       nginx
	ff5683c1bfec2       98f6c3b32d565       14 seconds ago       Exited              helm-test                                0                   c29b3cc573346       helm-test
	98029f7d670dc       738351fd438f0       24 seconds ago       Running             csi-snapshotter                          0                   6bbbeaef7e618       csi-hostpathplugin-kwkf2
	b440280239fdf       931dbfd16f87c       25 seconds ago       Running             csi-provisioner                          0                   6bbbeaef7e618       csi-hostpathplugin-kwkf2
	e1a7670d4586d       e899260153aed       26 seconds ago       Running             liveness-probe                           0                   6bbbeaef7e618       csi-hostpathplugin-kwkf2
	f8bbdbf3c9286       e255e073c508c       26 seconds ago       Running             hostpath                                 0                   6bbbeaef7e618       csi-hostpathplugin-kwkf2
	b7b65d3658d6a       6d2a98b274382       27 seconds ago       Running             gcp-auth                                 0                   6f68dd99f8994       gcp-auth-d4c87556c-fdqtx
	2cf2b7fee92e2       311f90a3747fd       28 seconds ago       Running             controller                               0                   184155fe0f85c       ingress-nginx-controller-69cff4fd79-6l2mk
	a09fd8cb278b8       88ef14a257f42       33 seconds ago       Running             node-driver-registrar                    0                   6bbbeaef7e618       csi-hostpathplugin-kwkf2
	8e47ffaa61098       754854eab8c1c       34 seconds ago       Running             cloud-spanner-emulator                   0                   9be687c468e30       cloud-spanner-emulator-64c8c85f65-44b5j
	3dfcb37066e22       8cfc3f994a82b       36 seconds ago       Running             nvidia-device-plugin-ctr                 0                   379f8c333b784       nvidia-device-plugin-daemonset-lmr4m
	d563e8cea6cd1       1ebff0f9671bc       40 seconds ago       Exited              patch                                    0                   d0249dd47af3e       gcp-auth-certs-patch-m7mv6
	8b2dc0799c98c       1ebff0f9671bc       41 seconds ago       Exited              create                                   0                   9ff9e172f3fec       gcp-auth-certs-create-kxrtn
	866f317e0e8ff       19a639eda60f0       41 seconds ago       Running             csi-resizer                              0                   7b88ff114de0f       csi-hostpath-resizer-0
	09775a461131c       a1ed5895ba635       42 seconds ago       Running             csi-external-health-monitor-controller   0                   6bbbeaef7e618       csi-hostpathplugin-kwkf2
	c782fabfb6f86       59cbb42146a37       43 seconds ago       Running             csi-attacher                             0                   bcdfa8e527f98       csi-hostpath-attacher-0
	59991ecd7a71c       1ebff0f9671bc       43 seconds ago       Exited              patch                                    1                   8224220cf630e       ingress-nginx-admission-patch-8xvd6
	bb5d5599012d8       1ebff0f9671bc       44 seconds ago       Exited              create                                   0                   b4b46f313ee77       ingress-nginx-admission-create-rcgr8
	1eb30f1d5c4e6       aa61ee9c70bc4       47 seconds ago       Running             volume-snapshot-controller               0                   311e2fa269427       snapshot-controller-58dbcc7b99-4rngn
	22ceb40395172       aa61ee9c70bc4       47 seconds ago       Running             volume-snapshot-controller               0                   06b569d7fa873       snapshot-controller-58dbcc7b99-8xhng
	d457d8c6bcf4b       31de47c733c91       52 seconds ago       Running             yakd                                     0                   791ad26f8fafa       yakd-dashboard-9947fc6bf-tjtlr
	65e7dc3df70cb       e16d1e3a10667       59 seconds ago       Running             local-path-provisioner                   0                   ca740c0b5150d       local-path-provisioner-78b46b4d5c-vxxxf
	11c4ec48778f5       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   08257abf5dd5c       kube-ingress-dns-minikube
	20082807b22ba       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   ba86e99709d10       coredns-5dd5756b68-dhqkg
	027c3c119c2c0       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   6566cadc9df3d       storage-provisioner
	fc5708a499819       c7d1297425461       About a minute ago   Running             kindnet-cni                              0                   0d267afdf0390       kindnet-p9gwl
	2ada3aafb372e       83f6cc407eed8       About a minute ago   Running             kube-proxy                               0                   db86f64fcab06       kube-proxy-6sbxl
	1795212d12e86       d058aa5ab969c       About a minute ago   Running             kube-controller-manager                  0                   b0ef8c3a18072       kube-controller-manager-addons-214491
	abb617918ca21       e3db313c6dbc0       About a minute ago   Running             kube-scheduler                           0                   99fa2a58612bb       kube-scheduler-addons-214491
	22e2442985946       73deb9a3f7025       About a minute ago   Running             etcd                                     0                   e5982d8b897ed       etcd-addons-214491
	0eb1d87425a8e       7fe0e6f37db33       About a minute ago   Running             kube-apiserver                           0                   f9ce0afcccc00       kube-apiserver-addons-214491
	
	
	==> containerd <==
	Jan 31 14:12:25 addons-214491 containerd[783]: time="2024-01-31T14:12:25.532195647Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Jan 31 14:12:25 addons-214491 containerd[783]: time="2024-01-31T14:12:25.534011487Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 31 14:12:25 addons-214491 containerd[783]: time="2024-01-31T14:12:25.813995982Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.563819342Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.566379293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.568257455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.568724836Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" returns image reference \"sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\""
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.570698649Z" level=info msg="CreateContainer within sandbox \"3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa\" for container &ContainerMetadata{Name:helper-pod,Attempt:0,}"
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.616245556Z" level=info msg="CreateContainer within sandbox \"3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa\" for &ContainerMetadata{Name:helper-pod,Attempt:0,} returns container id \"884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77\""
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.616961905Z" level=info msg="StartContainer for \"884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77\""
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.667113596Z" level=info msg="StartContainer for \"884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77\" returns successfully"
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.775017279Z" level=info msg="shim disconnected" id=884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.775091696Z" level=warning msg="cleaning up after shim disconnected" id=884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77 namespace=k8s.io
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.775108473Z" level=info msg="cleaning up dead shim"
	Jan 31 14:12:26 addons-214491 containerd[783]: time="2024-01-31T14:12:26.783759242Z" level=warning msg="cleanup warnings time=\"2024-01-31T14:12:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8352 runtime=io.containerd.runc.v2\n"
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.742755197Z" level=info msg="StopPodSandbox for \"3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa\""
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.742853511Z" level=info msg="Container to stop \"884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.773238286Z" level=info msg="shim disconnected" id=3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.773531510Z" level=warning msg="cleaning up after shim disconnected" id=3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa namespace=k8s.io
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.773560823Z" level=info msg="cleaning up dead shim"
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.782721072Z" level=warning msg="cleanup warnings time=\"2024-01-31T14:12:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8406 runtime=io.containerd.runc.v2\n"
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.826638769Z" level=info msg="TearDown network for sandbox \"3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa\" successfully"
	Jan 31 14:12:28 addons-214491 containerd[783]: time="2024-01-31T14:12:28.826683245Z" level=info msg="StopPodSandbox for \"3ff841a74152d483fd0dc29276ab3069cdd544cc0daaab6dfe5c219480f3e9aa\" returns successfully"
	Jan 31 14:12:29 addons-214491 containerd[783]: time="2024-01-31T14:12:29.748160041Z" level=info msg="RemoveContainer for \"884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77\""
	Jan 31 14:12:29 addons-214491 containerd[783]: time="2024-01-31T14:12:29.753458664Z" level=info msg="RemoveContainer for \"884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77\" returns successfully"
	
	
	==> coredns [20082807b22bafba9d25817d55fbf255dd277544471bc4bc2728d4222a6b8b37] <==
	[INFO] 10.244.0.3:52200 - 47021 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098574s
	[INFO] 10.244.0.3:33863 - 61967 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006507s
	[INFO] 10.244.0.3:33863 - 60928 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.008150932s
	[INFO] 10.244.0.3:51781 - 3670 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006176873s
	[INFO] 10.244.0.3:51781 - 26451 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006255234s
	[INFO] 10.244.0.3:59764 - 9735 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006394931s
	[INFO] 10.244.0.3:59764 - 29048 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006565105s
	[INFO] 10.244.0.3:57305 - 14744 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074065s
	[INFO] 10.244.0.3:57305 - 50837 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129805s
	[INFO] 10.244.0.21:56913 - 2570 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213681s
	[INFO] 10.244.0.21:42105 - 7099 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000353481s
	[INFO] 10.244.0.21:55828 - 52802 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152557s
	[INFO] 10.244.0.21:34693 - 63415 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128043s
	[INFO] 10.244.0.21:41318 - 10247 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107131s
	[INFO] 10.244.0.21:44480 - 46445 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125879s
	[INFO] 10.244.0.21:52879 - 19020 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.009336902s
	[INFO] 10.244.0.21:39852 - 22586 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.01415022s
	[INFO] 10.244.0.21:49406 - 52533 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008342825s
	[INFO] 10.244.0.21:60202 - 12859 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.010133937s
	[INFO] 10.244.0.21:39803 - 8702 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009125735s
	[INFO] 10.244.0.21:51045 - 48296 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.024088416s
	[INFO] 10.244.0.21:44937 - 60599 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001348958s
	[INFO] 10.244.0.21:49016 - 55787 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001447792s
	[INFO] 10.244.0.23:42892 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000177132s
	[INFO] 10.244.0.23:39712 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000171085s
	
	
	==> describe nodes <==
	Name:               addons-214491
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214491
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=274d15c48919de599d1c531208ca35671bcbf218
	                    minikube.k8s.io/name=addons-214491
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T14_10_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214491
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214491"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 14:10:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214491
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 14:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 14:12:29 +0000   Wed, 31 Jan 2024 14:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 14:12:29 +0000   Wed, 31 Jan 2024 14:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 14:12:29 +0000   Wed, 31 Jan 2024 14:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 14:12:29 +0000   Wed, 31 Jan 2024 14:10:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-214491
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 1513117ec6364ff3ab682486b7669ee9
	  System UUID:                ad3e0fa0-48f4-451b-a763-e0e2854f4f71
	  Boot ID:                    59de63ea-ec4c-4e26-a911-a59699678b11
	  Kernel Version:             5.15.0-1049-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-44b5j      0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  gcp-auth                    gcp-auth-d4c87556c-fdqtx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-6l2mk    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         71s
	  kube-system                 coredns-5dd5756b68-dhqkg                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     80s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 csi-hostpathplugin-kwkf2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-addons-214491                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-p9gwl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-addons-214491                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-addons-214491        200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-6sbxl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-addons-214491                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 nvidia-device-plugin-daemonset-lmr4m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 snapshot-controller-58dbcc7b99-4rngn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 snapshot-controller-58dbcc7b99-8xhng         0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  local-path-storage          local-path-provisioner-78b46b4d5c-vxxxf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-tjtlr               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             438Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node addons-214491 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node addons-214491 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x7 over 99s)  kubelet          Node addons-214491 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node addons-214491 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node addons-214491 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                kubelet          Node addons-214491 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             93s                kubelet          Node addons-214491 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                93s                kubelet          Node addons-214491 status is now: NodeReady
	  Normal  RegisteredNode           81s                node-controller  Node addons-214491 event: Registered Node addons-214491 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e ce 7e e2 57 50 08 06
	[Jan31 13:47] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 7d f5 31 e3 29 08 06
	[  +0.000158] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 03 8f a6 69 94 08 06
	[ +19.874195] IPv4: martian source 10.244.0.1 from 10.244.0.30, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 8f 4a 31 49 10 08 06
	[Jan31 13:48] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f1 65 e4 70 47 08 06
	[ +25.959432] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e 7f 9c 50 2d 37 08 06
	[Jan31 13:50] IPv4: martian source 10.244.0.1 from 10.244.0.40, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba be d3 f7 c8 ec 08 06
	[Jan31 13:52] IPv4: martian source 10.244.0.1 from 10.244.0.41, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 86 8e 5c f6 ff 08 06
	[ +15.266675] IPv4: martian source 10.244.0.1 from 10.244.0.42, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 d0 4b b7 1d f3 08 06
	[Jan31 13:53] IPv4: martian source 10.244.0.1 from 10.244.0.43, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 68 39 f5 41 71 08 06
	[Jan31 13:54] IPv4: martian source 10.244.0.1 from 10.244.0.44, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff f2 c9 46 5b 03 bb 08 06
	[ +32.842842] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be 8d 2c 5f db d3 08 06
	[Jan31 13:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 96 a9 b7 6c 7d 08 06
	
	
	==> etcd [22e244298594624880ed12b9a118863444f627bae00137f11d2c1b5a6e759ff8] <==
	{"level":"info","ts":"2024-01-31T14:10:52.126361Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T14:10:52.126498Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-31T14:10:52.126517Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-31T14:10:52.214956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-31T14:10:52.215086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-31T14:10:52.215142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-31T14:10:52.215229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-31T14:10:52.215261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-31T14:10:52.215307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-31T14:10:52.21535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-31T14:10:52.216289Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-214491 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T14:10:52.216328Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T14:10:52.216384Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T14:10:52.216599Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T14:10:52.216669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T14:10:52.216355Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T14:10:52.217056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T14:10:52.217237Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T14:10:52.217305Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T14:10:52.217775Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-31T14:10:52.218017Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T14:11:43.617313Z","caller":"traceutil/trace.go:171","msg":"trace[1345552921] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"155.011457ms","start":"2024-01-31T14:11:43.462264Z","end":"2024-01-31T14:11:43.617276Z","steps":["trace[1345552921] 'process raft request'  (duration: 110.796561ms)","trace[1345552921] 'compare'  (duration: 44.063544ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-31T14:11:45.052966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.828794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13482"}
	{"level":"info","ts":"2024-01-31T14:11:45.053091Z","caller":"traceutil/trace.go:171","msg":"trace[30449622] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:997; }","duration":"116.972323ms","start":"2024-01-31T14:11:44.936095Z","end":"2024-01-31T14:11:45.053067Z","steps":["trace[30449622] 'range keys from in-memory index tree'  (duration: 116.690517ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-31T14:12:15.613985Z","caller":"traceutil/trace.go:171","msg":"trace[1935600369] transaction","detail":"{read_only:false; response_revision:1230; number_of_response:1; }","duration":"106.973517ms","start":"2024-01-31T14:12:15.506981Z","end":"2024-01-31T14:12:15.613954Z","steps":["trace[1935600369] 'process raft request'  (duration: 106.787826ms)"],"step_count":1}
	
	
	==> gcp-auth [b7b65d3658d6ae0158e48347ab670d1af878312fa5c528eb8578344af056a040] <==
	2024/01/31 14:12:02 GCP Auth Webhook started!
	2024/01/31 14:12:13 Ready to marshal response ...
	2024/01/31 14:12:13 Ready to write response ...
	2024/01/31 14:12:18 Ready to marshal response ...
	2024/01/31 14:12:18 Ready to write response ...
	2024/01/31 14:12:22 Ready to marshal response ...
	2024/01/31 14:12:22 Ready to write response ...
	2024/01/31 14:12:25 Ready to marshal response ...
	2024/01/31 14:12:25 Ready to write response ...
	2024/01/31 14:12:25 Ready to marshal response ...
	2024/01/31 14:12:25 Ready to write response ...
	
	
	==> kernel <==
	 14:12:30 up 18:55,  0 users,  load average: 1.35, 0.87, 0.67
	Linux addons-214491 5.15.0-1049-gcp #57~20.04.1-Ubuntu SMP Wed Jan 17 16:04:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [fc5708a4998190033321ebad158cf42da467aef8369c4dc1afdc709839f7ae5f] <==
	I0131 14:11:11.708869       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0131 14:11:11.708952       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0131 14:11:11.709113       1 main.go:116] setting mtu 1500 for CNI 
	I0131 14:11:11.709124       1 main.go:146] kindnetd IP family: "ipv4"
	I0131 14:11:11.709147       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0131 14:11:12.104434       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0131 14:11:12.104842       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0131 14:11:13.111729       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0131 14:11:15.226704       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:11:15.226743       1 main.go:227] handling current node
	I0131 14:11:25.324928       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:11:25.324956       1 main.go:227] handling current node
	I0131 14:11:35.339072       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:11:35.339177       1 main.go:227] handling current node
	I0131 14:11:45.342907       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:11:45.342934       1 main.go:227] handling current node
	I0131 14:11:55.353381       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:11:55.353421       1 main.go:227] handling current node
	I0131 14:12:05.365601       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:12:05.365631       1 main.go:227] handling current node
	I0131 14:12:15.370353       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:12:15.370384       1 main.go:227] handling current node
	I0131 14:12:25.374071       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0131 14:12:25.374100       1 main.go:227] handling current node
	
	
	==> kube-apiserver [0eb1d87425a8ea62511f1efd17c5f5a4e11c3559c05612f9e10cdfa8c233b54a] <==
	W0131 14:11:35.352570       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 14:11:35.352603       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.208.117:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.208.117:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.208.117:443: connect: connection refused
	E0131 14:11:35.352727       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 14:11:35.406465       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 14:11:36.355277       1 handler_proxy.go:93] no RequestInfo found in the context
	W0131 14:11:36.355292       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 14:11:36.355313       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 14:11:36.355322       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0131 14:11:36.355391       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 14:11:36.356353       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 14:11:40.359819       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 14:11:40.359887       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0131 14:11:40.360079       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.208.117:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.208.117:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 14:11:40.365369       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0131 14:11:40.419543       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0131 14:11:54.162571       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0131 14:12:19.418293       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0131 14:12:19.424456       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0131 14:12:20.434755       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0131 14:12:20.750776       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc00a16db60), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc008338910), ResponseWriter:(*httpsnoop.rw)(0xc008338910), Flusher:(*httpsnoop.rw)(0xc008338910), CloseNotifier:(*httpsnoop.rw)(0xc008338910), Pusher:(*httpsnoop.rw)(0xc008338910)}}, encoder:(*versioning.codec)(0xc00a4ba280), memAllocator:(*runtime.Allocator)(0xc00d24d410)})
	I0131 14:12:21.956222       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0131 14:12:22.149358       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.206.180"}
	
	
	==> kube-controller-manager [1795212d12e8678f8bfce160f7b6fabd45d51e877117df973fd7994e1f1a1ddd] <==
	I0131 14:12:02.541816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="7.636236ms"
	I0131 14:12:02.541961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="85.677µs"
	I0131 14:12:07.092247       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0131 14:12:09.057404       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0131 14:12:11.940495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="8.808827ms"
	I0131 14:12:11.941385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="95.766µs"
	I0131 14:12:13.493964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="7.021µs"
	I0131 14:12:18.364208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="9.343µs"
	I0131 14:12:18.406370       1 event.go:307] "Event occurred" object="kube-system/tiller-deploy" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/tiller-deploy: Operation cannot be fulfilled on endpoints \"tiller-deploy\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/kube-system/tiller-deploy, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 05bdba54-4641-4bea-afef-e91530a56dea, UID in object meta: "
	E0131 14:12:20.436413       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0131 14:12:21.015843       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0131 14:12:21.017198       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0131 14:12:21.036477       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0131 14:12:21.037571       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0131 14:12:21.459311       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="15.058µs"
	W0131 14:12:21.703557       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 14:12:21.703634       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0131 14:12:24.058528       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0131 14:12:24.759469       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 14:12:24.759507       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0131 14:12:24.979768       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0131 14:12:25.111909       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0131 14:12:29.712225       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0131 14:12:30.323855       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 14:12:30.323889       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [2ada3aafb372ef92f7ed3b75d17e6990c046d6ef832cafc13b03c52e09d1e3f7] <==
	I0131 14:11:11.805583       1 server_others.go:69] "Using iptables proxy"
	I0131 14:11:11.917858       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0131 14:11:12.211978       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0131 14:11:12.220459       1 server_others.go:152] "Using iptables Proxier"
	I0131 14:11:12.220525       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0131 14:11:12.220536       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0131 14:11:12.220578       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 14:11:12.220822       1 server.go:846] "Version info" version="v1.28.4"
	I0131 14:11:12.220840       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 14:11:12.222554       1 config.go:315] "Starting node config controller"
	I0131 14:11:12.222576       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 14:11:12.223048       1 config.go:97] "Starting endpoint slice config controller"
	I0131 14:11:12.223060       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 14:11:12.223117       1 config.go:188] "Starting service config controller"
	I0131 14:11:12.223125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 14:11:12.322684       1 shared_informer.go:318] Caches are synced for node config
	I0131 14:11:12.323259       1 shared_informer.go:318] Caches are synced for service config
	I0131 14:11:12.323329       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [abb617918ca21e0b96e4ea9bf1ce7286aeef594af7b565eca149885aa576b8ba] <==
	W0131 14:10:54.325580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 14:10:54.325598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 14:10:54.325619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 14:10:54.325622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 14:10:54.325416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 14:10:54.325725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0131 14:10:54.325491       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0131 14:10:54.325760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0131 14:10:54.325591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 14:10:54.325791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 14:10:55.160691       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 14:10:55.160742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0131 14:10:55.172176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 14:10:55.172219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 14:10:55.182558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 14:10:55.182598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0131 14:10:55.202075       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 14:10:55.202113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 14:10:55.329504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 14:10:55.329549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 14:10:55.335904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 14:10:55.335946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 14:10:55.585561       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 14:10:55.585596       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0131 14:10:57.521696       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 31 14:12:25 addons-214491 kubelet[1508]: I0131 14:12:25.155819    1508 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-data\") pod \"helper-pod-create-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891\" (UID: \"3b0d3c22-62d4-414d-ae65-43aa549800af\") " pod="local-path-storage/helper-pod-create-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891"
	Jan 31 14:12:25 addons-214491 kubelet[1508]: I0131 14:12:25.155858    1508 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-gcp-creds\") pod \"helper-pod-create-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891\" (UID: \"3b0d3c22-62d4-414d-ae65-43aa549800af\") " pod="local-path-storage/helper-pod-create-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891"
	Jan 31 14:12:25 addons-214491 kubelet[1508]: I0131 14:12:25.156114    1508 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3b0d3c22-62d4-414d-ae65-43aa549800af-script\") pod \"helper-pod-create-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891\" (UID: \"3b0d3c22-62d4-414d-ae65-43aa549800af\") " pod="local-path-storage/helper-pod-create-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891"
	Jan 31 14:12:25 addons-214491 kubelet[1508]: I0131 14:12:25.740829    1508 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=1.51822477 podCreationTimestamp="2024-01-31 14:12:22 +0000 UTC" firstStartedPulling="2024-01-31 14:12:22.491202258 +0000 UTC m=+85.205105030" lastFinishedPulling="2024-01-31 14:12:24.713742803 +0000 UTC m=+87.427645567" observedRunningTime="2024-01-31 14:12:25.74050183 +0000 UTC m=+88.454404606" watchObservedRunningTime="2024-01-31 14:12:25.740765307 +0000 UTC m=+88.454668083"
	Jan 31 14:12:26 addons-214491 kubelet[1508]: I0131 14:12:26.755831    1508 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="local-path-storage/helper-pod-create-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891" podStartSLOduration=0.71854612 podCreationTimestamp="2024-01-31 14:12:25 +0000 UTC" firstStartedPulling="2024-01-31 14:12:25.531831618 +0000 UTC m=+88.245734376" lastFinishedPulling="2024-01-31 14:12:26.569075702 +0000 UTC m=+89.282978470" observedRunningTime="2024-01-31 14:12:26.754745407 +0000 UTC m=+89.468648182" watchObservedRunningTime="2024-01-31 14:12:26.755790214 +0000 UTC m=+89.469692990"
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880275    1508 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-data\") pod \"3b0d3c22-62d4-414d-ae65-43aa549800af\" (UID: \"3b0d3c22-62d4-414d-ae65-43aa549800af\") "
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880366    1508 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhvhq\" (UniqueName: \"kubernetes.io/projected/3b0d3c22-62d4-414d-ae65-43aa549800af-kube-api-access-zhvhq\") pod \"3b0d3c22-62d4-414d-ae65-43aa549800af\" (UID: \"3b0d3c22-62d4-414d-ae65-43aa549800af\") "
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880416    1508 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3b0d3c22-62d4-414d-ae65-43aa549800af-script\") pod \"3b0d3c22-62d4-414d-ae65-43aa549800af\" (UID: \"3b0d3c22-62d4-414d-ae65-43aa549800af\") "
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880446    1508 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-gcp-creds\") pod \"3b0d3c22-62d4-414d-ae65-43aa549800af\" (UID: \"3b0d3c22-62d4-414d-ae65-43aa549800af\") "
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880471    1508 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-data" (OuterVolumeSpecName: "data") pod "3b0d3c22-62d4-414d-ae65-43aa549800af" (UID: "3b0d3c22-62d4-414d-ae65-43aa549800af"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880537    1508 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "3b0d3c22-62d4-414d-ae65-43aa549800af" (UID: "3b0d3c22-62d4-414d-ae65-43aa549800af"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880653    1508 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-gcp-creds\") on node \"addons-214491\" DevicePath \"\""
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880682    1508 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3b0d3c22-62d4-414d-ae65-43aa549800af-data\") on node \"addons-214491\" DevicePath \"\""
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.880985    1508 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b0d3c22-62d4-414d-ae65-43aa549800af-script" (OuterVolumeSpecName: "script") pod "3b0d3c22-62d4-414d-ae65-43aa549800af" (UID: "3b0d3c22-62d4-414d-ae65-43aa549800af"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.882885    1508 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0d3c22-62d4-414d-ae65-43aa549800af-kube-api-access-zhvhq" (OuterVolumeSpecName: "kube-api-access-zhvhq") pod "3b0d3c22-62d4-414d-ae65-43aa549800af" (UID: "3b0d3c22-62d4-414d-ae65-43aa549800af"). InnerVolumeSpecName "kube-api-access-zhvhq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.981676    1508 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3b0d3c22-62d4-414d-ae65-43aa549800af-script\") on node \"addons-214491\" DevicePath \"\""
	Jan 31 14:12:28 addons-214491 kubelet[1508]: I0131 14:12:28.981720    1508 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zhvhq\" (UniqueName: \"kubernetes.io/projected/3b0d3c22-62d4-414d-ae65-43aa549800af-kube-api-access-zhvhq\") on node \"addons-214491\" DevicePath \"\""
	Jan 31 14:12:29 addons-214491 kubelet[1508]: I0131 14:12:29.517800    1508 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3b0d3c22-62d4-414d-ae65-43aa549800af" path="/var/lib/kubelet/pods/3b0d3c22-62d4-414d-ae65-43aa549800af/volumes"
	Jan 31 14:12:29 addons-214491 kubelet[1508]: I0131 14:12:29.746874    1508 scope.go:117] "RemoveContainer" containerID="884b7e4b417699eabfd37cd0a8f6064205665f2bbac9e47fc438a94470964d77"
	Jan 31 14:12:30 addons-214491 kubelet[1508]: I0131 14:12:30.116907    1508 topology_manager.go:215] "Topology Admit Handler" podUID="6c549690-d1fd-44de-8f12-06862bd4372a" podNamespace="default" podName="test-local-path"
	Jan 31 14:12:30 addons-214491 kubelet[1508]: E0131 14:12:30.117029    1508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b0d3c22-62d4-414d-ae65-43aa549800af" containerName="helper-pod"
	Jan 31 14:12:30 addons-214491 kubelet[1508]: I0131 14:12:30.117095    1508 memory_manager.go:346] "RemoveStaleState removing state" podUID="3b0d3c22-62d4-414d-ae65-43aa549800af" containerName="helper-pod"
	Jan 31 14:12:30 addons-214491 kubelet[1508]: I0131 14:12:30.192756    1508 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6c549690-d1fd-44de-8f12-06862bd4372a-gcp-creds\") pod \"test-local-path\" (UID: \"6c549690-d1fd-44de-8f12-06862bd4372a\") " pod="default/test-local-path"
	Jan 31 14:12:30 addons-214491 kubelet[1508]: I0131 14:12:30.192817    1508 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891\" (UniqueName: \"kubernetes.io/host-path/6c549690-d1fd-44de-8f12-06862bd4372a-pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891\") pod \"test-local-path\" (UID: \"6c549690-d1fd-44de-8f12-06862bd4372a\") " pod="default/test-local-path"
	Jan 31 14:12:30 addons-214491 kubelet[1508]: I0131 14:12:30.192844    1508 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqljk\" (UniqueName: \"kubernetes.io/projected/6c549690-d1fd-44de-8f12-06862bd4372a-kube-api-access-pqljk\") pod \"test-local-path\" (UID: \"6c549690-d1fd-44de-8f12-06862bd4372a\") " pod="default/test-local-path"
	
	
	==> storage-provisioner [027c3c119c2c083619acb12c682b3f1139a5133921532a56a1638c22742a6fd1] <==
	I0131 14:11:18.929217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 14:11:19.013761       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 14:11:19.013835       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 14:11:19.026742       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 14:11:19.102007       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-214491_f2f0b446-e24a-464c-acd3-36314cee216c!
	I0131 14:11:19.102473       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"926895b8-0fd9-449e-8373-bf4c601c180d", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-214491_f2f0b446-e24a-464c-acd3-36314cee216c became leader
	I0131 14:11:19.202526       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-214491_f2f0b446-e24a-464c-acd3-36314cee216c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214491 -n addons-214491
helpers_test.go:261: (dbg) Run:  kubectl --context addons-214491 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path ingress-nginx-admission-create-rcgr8 ingress-nginx-admission-patch-8xvd6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-214491 describe pod test-local-path ingress-nginx-admission-create-rcgr8 ingress-nginx-admission-patch-8xvd6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-214491 describe pod test-local-path ingress-nginx-admission-create-rcgr8 ingress-nginx-admission-patch-8xvd6: exit status 1 (71.382246ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214491/192.168.49.2
	Start Time:       Wed, 31 Jan 2024 14:12:30 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqljk (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-pqljk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/test-local-path to addons-214491
	  Normal  Pulling    1s    kubelet            Pulling image "busybox:stable"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rcgr8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8xvd6" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-214491 describe pod test-local-path ingress-nginx-admission-create-rcgr8 ingress-nginx-admission-patch-8xvd6: exit status 1
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (7.54s)

TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-259653 /tmp/TestFunctionalserialCacheCmdcacheadd_local419424607/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cache add minikube-local-cache-test:functional-259653
functional_test.go:1085: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 cache add minikube-local-cache-test:functional-259653: exit status 10 (549.993352ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: Failed to cache and load images: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/minikube-local-cache-test_functional-259653": write: unable to calculate manifest: blob sha256:ae4f2fbda766007d98ca0e7dfbd24cf21fc881a9deddd2607575138bd03902a4 not found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_dd2791559a7ff632222ac315827a87590d196feb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1087: failed to 'cache add' local image "minikube-local-cache-test:functional-259653". args "out/minikube-linux-amd64 -p functional-259653 cache add minikube-local-cache-test:functional-259653" err exit status 10
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cache delete minikube-local-cache-test:functional-259653
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 cache delete minikube-local-cache-test:functional-259653: exit status 30 (84.788677ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: Failed to delete images: remove /home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/minikube-local-cache-test_functional-259653: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_3ab86d86c83b8b28f09fbf50a15ec3b8d7c0686a_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1092: failed to 'cache delete' local image "minikube-local-cache-test:functional-259653". args "out/minikube-linux-amd64 -p functional-259653 cache delete minikube-local-cache-test:functional-259653" err exit status 30
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-259653
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image load --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr
functional_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 image load --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr: exit status 80 (958.246421ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0131 14:17:42.519430  161296 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:17:42.519661  161296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:42.519676  161296 out.go:309] Setting ErrFile to fd 2...
	I0131 14:17:42.519683  161296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:42.520020  161296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:17:42.520861  161296 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:42.520980  161296 cache.go:107] acquiring lock: {Name:mk5f3a97d3304748be111ef6acab78c5c29dc8f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 14:17:42.521270  161296 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-259653
	I0131 14:17:42.523685  161296 image.go:173] found gcr.io/google-containers/addon-resizer:functional-259653 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-259653 original:gcr.io/google-containers/addon-resizer:functional-259653} opener:0xc0005687e0 tarballImage:<nil> computed:false id:0xc0009f60a0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0131 14:17:42.523725  161296 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653
	I0131 14:17:43.368449  161296 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-259653" -> "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653" took 847.493102ms
	I0131 14:17:43.372143  161296 out.go:177] 
	W0131 14:17:43.373621  161296 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0131 14:17:43.373642  161296 out.go:239] * 
	* 
	W0131 14:17:43.383349  161296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 14:17:43.384975  161296 out.go:177] 

** /stderr **
functional_test.go:356: loading image into minikube from daemon: exit status 80

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0131 14:17:42.519430  161296 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:17:42.519661  161296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:42.519676  161296 out.go:309] Setting ErrFile to fd 2...
	I0131 14:17:42.519683  161296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:42.520020  161296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:17:42.520861  161296 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:42.520980  161296 cache.go:107] acquiring lock: {Name:mk5f3a97d3304748be111ef6acab78c5c29dc8f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 14:17:42.521270  161296 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-259653
	I0131 14:17:42.523685  161296 image.go:173] found gcr.io/google-containers/addon-resizer:functional-259653 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-259653 original:gcr.io/google-containers/addon-resizer:functional-259653} opener:0xc0005687e0 tarballImage:<nil> computed:false id:0xc0009f60a0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0131 14:17:42.523725  161296 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653
	I0131 14:17:43.368449  161296 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-259653" -> "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653" took 847.493102ms
	I0131 14:17:43.372143  161296 out.go:177] 
	W0131 14:17:43.373621  161296 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0131 14:17:43.373642  161296 out.go:239] * 
	* 
	W0131 14:17:43.383349  161296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 14:17:43.384975  161296 out.go:177] 

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image load --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr
functional_test.go:364: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 image load --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr: exit status 80 (887.447563ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 14:17:43.469610  161863 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:17:43.469910  161863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:43.469922  161863 out.go:309] Setting ErrFile to fd 2...
	I0131 14:17:43.469927  161863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:43.470141  161863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:17:43.470787  161863 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:43.470874  161863 cache.go:107] acquiring lock: {Name:mk5f3a97d3304748be111ef6acab78c5c29dc8f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 14:17:43.470998  161863 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-259653
	I0131 14:17:43.472664  161863 image.go:173] found gcr.io/google-containers/addon-resizer:functional-259653 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-259653 original:gcr.io/google-containers/addon-resizer:functional-259653} opener:0xc00053a000 tarballImage:<nil> computed:false id:0xc0002aa0a0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0131 14:17:43.472706  161863 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653
	I0131 14:17:44.243817  161863 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-259653" -> "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653" took 772.952234ms
	I0131 14:17:44.246345  161863 out.go:177] 
	W0131 14:17:44.247781  161863 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0131 14:17:44.247803  161863 out.go:239] * 
	* 
	W0131 14:17:44.261961  161863 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 14:17:44.263896  161863 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:366: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.041991897s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-259653
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image load --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr
functional_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 image load --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr: exit status 80 (1.16370236s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 14:17:45.423359  162596 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:17:45.423553  162596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:45.423566  162596 out.go:309] Setting ErrFile to fd 2...
	I0131 14:17:45.423574  162596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:45.423889  162596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:17:45.424718  162596 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:45.424808  162596 cache.go:107] acquiring lock: {Name:mk5f3a97d3304748be111ef6acab78c5c29dc8f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 14:17:45.424904  162596 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-259653
	I0131 14:17:45.426958  162596 image.go:173] found gcr.io/google-containers/addon-resizer:functional-259653 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-259653 original:gcr.io/google-containers/addon-resizer:functional-259653} opener:0xc000554000 tarballImage:<nil> computed:false id:0xc000898080 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0131 14:17:45.427003  162596 cache.go:162] opening:  /home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653
	I0131 14:17:46.487937  162596 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-259653" -> "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653" took 1.06314393s
	I0131 14:17:46.492563  162596 out.go:177] 
	W0131 14:17:46.494108  162596 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	W0131 14:17:46.494146  162596 out.go:239] * 
	* 
	W0131 14:17:46.504364  162596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 14:17:46.506560  162596 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:246: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image save gcr.io/google-containers/addon-resizer:functional-259653 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
E0131 14:17:47.881415  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0131 14:17:47.786809  162838 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:17:47.787066  162838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:47.787074  162838 out.go:309] Setting ErrFile to fd 2...
	I0131 14:17:47.787079  162838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:47.787263  162838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:17:47.787858  162838 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:47.787975  162838 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:47.788413  162838 cli_runner.go:164] Run: docker container inspect functional-259653 --format={{.State.Status}}
	I0131 14:17:47.806736  162838 ssh_runner.go:195] Run: systemctl --version
	I0131 14:17:47.806824  162838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-259653
	I0131 14:17:47.827036  162838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/functional-259653/id_rsa Username:docker}
	I0131 14:17:47.952065  162838 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
	W0131 14:17:47.952133  162838 cache_images.go:254] Failed to load cached images for profile functional-259653. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: no such file or directory
	I0131 14:17:47.952157  162838 cache_images.go:262] succeeded pushing to: 
	I0131 14:17:47.952166  162838 cache_images.go:263] failed pushing to: functional-259653

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-259653
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image save --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr
functional_test.go:423: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 image save --daemon gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr: exit status 80 (666.765583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 14:17:48.045933  162880 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:17:48.046148  162880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:48.046163  162880 out.go:309] Setting ErrFile to fd 2...
	I0131 14:17:48.046170  162880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:17:48.046454  162880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:17:48.047126  162880 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:48.047178  162880 cache_images.go:396] Save images: ["gcr.io/google-containers/addon-resizer:functional-259653"]
	I0131 14:17:48.047294  162880 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:17:48.047757  162880 cli_runner.go:164] Run: docker container inspect functional-259653 --format={{.State.Status}}
	I0131 14:17:48.068422  162880 cache_images.go:341] SaveImages start: [gcr.io/google-containers/addon-resizer:functional-259653]
	I0131 14:17:48.068577  162880 ssh_runner.go:195] Run: systemctl --version
	I0131 14:17:48.068669  162880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-259653
	I0131 14:17:48.090704  162880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/functional-259653/id_rsa Username:docker}
	I0131 14:17:48.202379  162880 containerd.go:252] Checking existence of image with name "gcr.io/google-containers/addon-resizer:functional-259653" and sha ""
	I0131 14:17:48.202462  162880 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0131 14:17:48.623041  162880 cache_images.go:345] SaveImages completed in 554.584007ms
	W0131 14:17:48.623089  162880 cache_images.go:442] Failed to load cached images for profile functional-259653. make sure the profile is running. saving cached images: image gcr.io/google-containers/addon-resizer:functional-259653 not found
	I0131 14:17:48.623103  162880 cache_images.go:450] succeeded pulling from : 
	I0131 14:17:48.623108  162880 cache_images.go:451] failed pulling from : functional-259653
	I0131 14:17:48.626800  162880 out.go:177] 
	W0131 14:17:48.628353  162880 out.go:239] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653: no such file or directory
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18007-117277/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-259653: no such file or directory
	W0131 14:17:48.628378  162880 out.go:239] * 
	* 
	W0131 14:17:48.642580  162880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 14:17:48.644478  162880 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:425: saving image from minikube to daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

                                                
                                    

Test pass (286/320)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 7.68
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
9 TestDownloadOnly/v1.16.0/DeleteAll 0.24
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.28.4/json-events 5.02
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.29.0-rc.2/json-events 5.15
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.24
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.16
29 TestDownloadOnlyKic 1.34
30 TestBinaryMirror 0.78
31 TestOffline 62.84
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 106.73
38 TestAddons/parallel/Registry 14.7
39 TestAddons/parallel/Ingress 19.42
40 TestAddons/parallel/InspektorGadget 11.24
41 TestAddons/parallel/MetricsServer 6.71
42 TestAddons/parallel/HelmTiller 11.6
44 TestAddons/parallel/CSI 111.01
45 TestAddons/parallel/Headlamp 11.19
46 TestAddons/parallel/CloudSpanner 6.52
47 TestAddons/parallel/LocalPath 52.95
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.2
53 TestAddons/StoppedEnableDisable 12.18
54 TestCertOptions 38.29
55 TestCertExpiration 221.05
57 TestForceSystemdFlag 35.71
58 TestForceSystemdEnv 36.06
59 TestDockerEnvContainerd 37.54
60 TestKVMDriverInstallOrUpdate 3.22
64 TestErrorSpam/setup 24.25
65 TestErrorSpam/start 0.69
66 TestErrorSpam/status 0.96
67 TestErrorSpam/pause 1.64
68 TestErrorSpam/unpause 1.63
69 TestErrorSpam/stop 1.46
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 53.64
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.11
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.06
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.16
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
89 TestFunctional/serial/ExtraConfig 40.35
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.49
92 TestFunctional/serial/LogsFileCmd 1.54
93 TestFunctional/serial/InvalidService 4.81
95 TestFunctional/parallel/ConfigCmd 0.53
96 TestFunctional/parallel/DashboardCmd 14.3
97 TestFunctional/parallel/DryRun 0.47
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 0.97
103 TestFunctional/parallel/ServiceCmdConnect 13.74
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 36.3
107 TestFunctional/parallel/SSHCmd 0.7
108 TestFunctional/parallel/CpCmd 2.02
109 TestFunctional/parallel/MySQL 20.45
110 TestFunctional/parallel/FileSync 0.33
111 TestFunctional/parallel/CertSync 2.1
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
119 TestFunctional/parallel/License 0.19
120 TestFunctional/parallel/Version/short 0.07
121 TestFunctional/parallel/Version/components 0.65
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.61
127 TestFunctional/parallel/ImageCommands/Setup 1.02
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.27
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ServiceCmd/DeployApp 8.14
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
151 TestFunctional/parallel/MountCmd/any-port 7.78
152 TestFunctional/parallel/ProfileCmd/profile_list 0.38
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
154 TestFunctional/parallel/ServiceCmd/List 1.75
155 TestFunctional/parallel/ServiceCmd/JSONOutput 1.76
156 TestFunctional/parallel/MountCmd/specific-port 2.18
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.66
158 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
159 TestFunctional/parallel/ServiceCmd/Format 0.78
160 TestFunctional/parallel/ServiceCmd/URL 0.62
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 69.16
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.81
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
171 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.37
174 TestJSONOutput/start/Command 46.93
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.68
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.63
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.78
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.25
199 TestKicCustomNetwork/create_custom_network 33.6
200 TestKicCustomNetwork/use_default_bridge_network 29.02
201 TestKicExistingNetwork 28.54
202 TestKicCustomSubnet 27.8
203 TestKicStaticIP 25.09
204 TestMainNoArgs 0.07
205 TestMinikubeProfile 52.17
208 TestMountStart/serial/StartWithMountFirst 7.83
209 TestMountStart/serial/VerifyMountFirst 0.28
210 TestMountStart/serial/StartWithMountSecond 7.94
211 TestMountStart/serial/VerifyMountSecond 0.28
212 TestMountStart/serial/DeleteFirst 1.63
213 TestMountStart/serial/VerifyMountPostDelete 0.27
214 TestMountStart/serial/Stop 1.2
215 TestMountStart/serial/RestartStopped 6.87
216 TestMountStart/serial/VerifyMountPostStop 0.27
219 TestMultiNode/serial/FreshStart2Nodes 72.78
220 TestMultiNode/serial/DeployApp2Nodes 3.53
221 TestMultiNode/serial/PingHostFrom2Pods 0.86
222 TestMultiNode/serial/AddNode 15.51
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.3
225 TestMultiNode/serial/CopyFile 9.92
226 TestMultiNode/serial/StopNode 2.16
227 TestMultiNode/serial/StartAfterStop 11.15
228 TestMultiNode/serial/RestartKeepsNodes 115.69
229 TestMultiNode/serial/DeleteNode 4.74
230 TestMultiNode/serial/StopMultiNode 23.83
231 TestMultiNode/serial/RestartMultiNode 79.22
232 TestMultiNode/serial/ValidateNameConflict 27.21
237 TestPreload 116.33
239 TestScheduledStopUnix 97.42
242 TestInsufficientStorage 13.24
243 TestRunningBinaryUpgrade 68.65
245 TestKubernetesUpgrade 349.48
246 TestMissingContainerUpgrade 137.78
248 TestStoppedBinaryUpgrade/Setup 0.54
249 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
250 TestNoKubernetes/serial/StartWithK8s 36.16
251 TestStoppedBinaryUpgrade/Upgrade 184.24
252 TestNoKubernetes/serial/StartWithStopK8s 16.1
253 TestNoKubernetes/serial/Start 7.35
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
255 TestNoKubernetes/serial/ProfileList 8.99
263 TestNetworkPlugins/group/false 3.96
264 TestNoKubernetes/serial/Stop 1.23
265 TestNoKubernetes/serial/StartNoArgs 6.34
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
277 TestStoppedBinaryUpgrade/MinikubeLogs 1
279 TestPause/serial/Start 56.62
280 TestNetworkPlugins/group/auto/Start 52.57
281 TestPause/serial/SecondStartNoReconfiguration 4.94
282 TestPause/serial/Pause 0.74
283 TestPause/serial/VerifyStatus 0.33
284 TestPause/serial/Unpause 0.61
285 TestPause/serial/PauseAgain 0.85
286 TestPause/serial/DeletePaused 2.56
287 TestPause/serial/VerifyDeletedResources 13.73
288 TestNetworkPlugins/group/auto/KubeletFlags 0.3
289 TestNetworkPlugins/group/auto/NetCatPod 9.19
290 TestNetworkPlugins/group/auto/DNS 0.14
291 TestNetworkPlugins/group/auto/Localhost 0.11
292 TestNetworkPlugins/group/auto/HairPin 0.11
293 TestNetworkPlugins/group/kindnet/Start 51.2
294 TestNetworkPlugins/group/calico/Start 70.53
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
297 TestNetworkPlugins/group/kindnet/NetCatPod 8.22
298 TestNetworkPlugins/group/custom-flannel/Start 52.03
299 TestNetworkPlugins/group/kindnet/DNS 0.17
300 TestNetworkPlugins/group/kindnet/Localhost 0.14
301 TestNetworkPlugins/group/kindnet/HairPin 0.15
302 TestNetworkPlugins/group/enable-default-cni/Start 38.23
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.31
305 TestNetworkPlugins/group/calico/NetCatPod 9.2
306 TestNetworkPlugins/group/calico/DNS 0.18
307 TestNetworkPlugins/group/calico/Localhost 0.15
308 TestNetworkPlugins/group/calico/HairPin 0.13
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
311 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
312 TestNetworkPlugins/group/custom-flannel/DNS 0.15
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.24
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
316 TestNetworkPlugins/group/flannel/Start 56.34
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
320 TestNetworkPlugins/group/bridge/Start 41.56
322 TestStartStop/group/old-k8s-version/serial/FirstStart 121.89
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
326 TestNetworkPlugins/group/flannel/NetCatPod 9.22
327 TestNetworkPlugins/group/bridge/NetCatPod 9.19
328 TestNetworkPlugins/group/flannel/DNS 0.15
329 TestNetworkPlugins/group/flannel/Localhost 0.14
330 TestNetworkPlugins/group/bridge/DNS 0.18
331 TestNetworkPlugins/group/flannel/HairPin 0.15
332 TestNetworkPlugins/group/bridge/Localhost 0.15
333 TestNetworkPlugins/group/bridge/HairPin 0.14
335 TestStartStop/group/no-preload/serial/FirstStart 64.94
337 TestStartStop/group/embed-certs/serial/FirstStart 54.68
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.33
340 TestStartStop/group/embed-certs/serial/DeployApp 7.23
341 TestStartStop/group/old-k8s-version/serial/DeployApp 7.38
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
343 TestStartStop/group/embed-certs/serial/Stop 11.93
344 TestStartStop/group/no-preload/serial/DeployApp 8.25
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.26
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.82
347 TestStartStop/group/old-k8s-version/serial/Stop 11.89
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
350 TestStartStop/group/default-k8s-diff-port/serial/Stop 15.68
351 TestStartStop/group/no-preload/serial/Stop 15.02
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
353 TestStartStop/group/embed-certs/serial/SecondStart 340.13
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
355 TestStartStop/group/old-k8s-version/serial/SecondStart 64.26
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/no-preload/serial/SecondStart 336.94
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 329.43
360 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 34.01
361 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
362 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
363 TestStartStop/group/old-k8s-version/serial/Pause 2.75
365 TestStartStop/group/newest-cni/serial/FirstStart 32.16
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
368 TestStartStop/group/newest-cni/serial/Stop 1.2
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
370 TestStartStop/group/newest-cni/serial/SecondStart 25.64
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
374 TestStartStop/group/newest-cni/serial/Pause 2.64
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
379 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
381 TestStartStop/group/embed-certs/serial/Pause 2.95
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.9
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
386 TestStartStop/group/no-preload/serial/Pause 2.68
TestDownloadOnly/v1.16.0/json-events (7.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-256653 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-256653 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.681506059s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.68s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-256653
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-256653: exit status 85 (87.303807ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-256653 | jenkins | v1.32.0 | 31 Jan 24 14:09 UTC |          |
	|         | -p download-only-256653        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 14:09:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 14:09:58.306532  124070 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:09:58.306726  124070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:09:58.306738  124070 out.go:309] Setting ErrFile to fd 2...
	I0131 14:09:58.306742  124070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:09:58.306952  124070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	W0131 14:09:58.307093  124070 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18007-117277/.minikube/config/config.json: open /home/jenkins/minikube-integration/18007-117277/.minikube/config/config.json: no such file or directory
	I0131 14:09:58.307703  124070 out.go:303] Setting JSON to true
	I0131 14:09:58.308680  124070 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":67951,"bootTime":1706642248,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 14:09:58.308751  124070 start.go:138] virtualization: kvm guest
	I0131 14:09:58.311439  124070 out.go:97] [download-only-256653] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 14:09:58.312990  124070 out.go:169] MINIKUBE_LOCATION=18007
	W0131 14:09:58.311588  124070 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball: no such file or directory
	I0131 14:09:58.311647  124070 notify.go:220] Checking for updates...
	I0131 14:09:58.315704  124070 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 14:09:58.316969  124070 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:09:58.318169  124070 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	I0131 14:09:58.319355  124070 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0131 14:09:58.321854  124070 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0131 14:09:58.322180  124070 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 14:09:58.347125  124070 docker.go:122] docker version: linux-25.0.1:Docker Engine - Community
	I0131 14:09:58.347246  124070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:09:58.405250  124070 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2024-01-31 14:09:58.39457657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:09:58.405407  124070 docker.go:295] overlay module found
	I0131 14:09:58.408016  124070 out.go:97] Using the docker driver based on user configuration
	I0131 14:09:58.408090  124070 start.go:298] selected driver: docker
	I0131 14:09:58.408098  124070 start.go:902] validating driver "docker" against <nil>
	I0131 14:09:58.408206  124070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:09:58.464564  124070 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2024-01-31 14:09:58.454775701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:09:58.464744  124070 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 14:09:58.465254  124070 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0131 14:09:58.465394  124070 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0131 14:09:58.467445  124070 out.go:169] Using Docker driver with root privileges
	I0131 14:09:58.468766  124070 cni.go:84] Creating CNI manager for ""
	I0131 14:09:58.468778  124070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0131 14:09:58.468786  124070 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0131 14:09:58.468816  124070 start_flags.go:321] config:
	{Name:download-only-256653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-256653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 14:09:58.470486  124070 out.go:97] Starting control plane node download-only-256653 in cluster download-only-256653
	I0131 14:09:58.470505  124070 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0131 14:09:58.471906  124070 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0131 14:09:58.471934  124070 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0131 14:09:58.472088  124070 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0131 14:09:58.488539  124070 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0131 14:09:58.488731  124070 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0131 14:09:58.488815  124070 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0131 14:09:58.502745  124070 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0131 14:09:58.502774  124070 cache.go:56] Caching tarball of preloaded images
	I0131 14:09:58.502925  124070 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0131 14:09:58.505118  124070 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0131 14:09:58.505132  124070 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0131 14:09:58.542236  124070 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0131 14:10:02.162777  124070 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0131 14:10:02.913678  124070 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0131 14:10:02.913783  124070 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-256653"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-256653
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (5.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-389052 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-389052 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.022610407s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.02s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-389052
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-389052: exit status 85 (88.246763ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-256653 | jenkins | v1.32.0 | 31 Jan 24 14:09 UTC |                     |
	|         | -p download-only-256653        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-256653        | download-only-256653 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| start   | -o=json --download-only        | download-only-389052 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | -p download-only-389052        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 14:10:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 14:10:06.475058  124355 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:10:06.475348  124355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:10:06.475358  124355 out.go:309] Setting ErrFile to fd 2...
	I0131 14:10:06.475366  124355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:10:06.475586  124355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:10:06.476196  124355 out.go:303] Setting JSON to true
	I0131 14:10:06.477233  124355 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":67959,"bootTime":1706642248,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 14:10:06.477307  124355 start.go:138] virtualization: kvm guest
	I0131 14:10:06.479554  124355 out.go:97] [download-only-389052] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 14:10:06.481277  124355 out.go:169] MINIKUBE_LOCATION=18007
	I0131 14:10:06.479743  124355 notify.go:220] Checking for updates...
	I0131 14:10:06.484396  124355 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 14:10:06.485697  124355 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:10:06.486959  124355 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	I0131 14:10:06.488247  124355 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0131 14:10:06.490911  124355 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0131 14:10:06.491238  124355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 14:10:06.517993  124355 docker.go:122] docker version: linux-25.0.1:Docker Engine - Community
	I0131 14:10:06.518112  124355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:10:06.571328  124355 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:52 SystemTime:2024-01-31 14:10:06.560681391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:10:06.571452  124355 docker.go:295] overlay module found
	I0131 14:10:06.573639  124355 out.go:97] Using the docker driver based on user configuration
	I0131 14:10:06.573681  124355 start.go:298] selected driver: docker
	I0131 14:10:06.573688  124355 start.go:902] validating driver "docker" against <nil>
	I0131 14:10:06.573842  124355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:10:06.627493  124355 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:52 SystemTime:2024-01-31 14:10:06.618178128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:10:06.627692  124355 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 14:10:06.628179  124355 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0131 14:10:06.628676  124355 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0131 14:10:06.631063  124355 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-389052"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-389052
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (5.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-755607 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-755607 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.146287763s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (5.15s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-755607
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-755607: exit status 85 (88.968403ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-256653 | jenkins | v1.32.0 | 31 Jan 24 14:09 UTC |                     |
	|         | -p download-only-256653           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-256653           | download-only-256653 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| start   | -o=json --download-only           | download-only-389052 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | -p download-only-389052           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| delete  | -p download-only-389052           | download-only-389052 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC | 31 Jan 24 14:10 UTC |
	| start   | -o=json --download-only           | download-only-755607 | jenkins | v1.32.0 | 31 Jan 24 14:10 UTC |                     |
	|         | -p download-only-755607           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 14:10:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 14:10:11.989049  124639 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:10:11.989369  124639 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:10:11.989381  124639 out.go:309] Setting ErrFile to fd 2...
	I0131 14:10:11.989388  124639 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:10:11.989635  124639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:10:11.990300  124639 out.go:303] Setting JSON to true
	I0131 14:10:11.991373  124639 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":67964,"bootTime":1706642248,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 14:10:11.991443  124639 start.go:138] virtualization: kvm guest
	I0131 14:10:11.993868  124639 out.go:97] [download-only-755607] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 14:10:11.995385  124639 out.go:169] MINIKUBE_LOCATION=18007
	I0131 14:10:11.994074  124639 notify.go:220] Checking for updates...
	I0131 14:10:11.998081  124639 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 14:10:11.999484  124639 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:10:12.000813  124639 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	I0131 14:10:12.002324  124639 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0131 14:10:12.005078  124639 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0131 14:10:12.005426  124639 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 14:10:12.028546  124639 docker.go:122] docker version: linux-25.0.1:Docker Engine - Community
	I0131 14:10:12.028674  124639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:10:12.083969  124639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-01-31 14:10:12.074385243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:10:12.084126  124639 docker.go:295] overlay module found
	I0131 14:10:12.086157  124639 out.go:97] Using the docker driver based on user configuration
	I0131 14:10:12.086184  124639 start.go:298] selected driver: docker
	I0131 14:10:12.086190  124639 start.go:902] validating driver "docker" against <nil>
	I0131 14:10:12.086292  124639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:10:12.137335  124639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:51 SystemTime:2024-01-31 14:10:12.128085688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:10:12.137557  124639 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 14:10:12.138067  124639 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0131 14:10:12.138212  124639 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0131 14:10:12.140125  124639 out.go:169] Using Docker driver with root privileges
	I0131 14:10:12.141529  124639 cni.go:84] Creating CNI manager for ""
	I0131 14:10:12.141552  124639 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0131 14:10:12.141566  124639 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0131 14:10:12.141577  124639 start_flags.go:321] config:
	{Name:download-only-755607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-755607 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 14:10:12.142976  124639 out.go:97] Starting control plane node download-only-755607 in cluster download-only-755607
	I0131 14:10:12.142992  124639 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0131 14:10:12.144185  124639 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0131 14:10:12.144218  124639 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0131 14:10:12.144339  124639 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0131 14:10:12.160825  124639 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0131 14:10:12.160970  124639 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0131 14:10:12.160995  124639 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0131 14:10:12.161004  124639 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0131 14:10:12.161014  124639 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0131 14:10:12.175737  124639 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0131 14:10:12.175778  124639 cache.go:56] Caching tarball of preloaded images
	I0131 14:10:12.175958  124639 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0131 14:10:12.177964  124639 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0131 14:10:12.177996  124639 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0131 14:10:12.206489  124639 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0131 14:10:15.497147  124639 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0131 14:10:15.497270  124639 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18007-117277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0131 14:10:16.340066  124639 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0131 14:10:16.340500  124639 profile.go:148] Saving config to /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/download-only-755607/config.json ...
	I0131 14:10:16.340553  124639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/download-only-755607/config.json: {Name:mk1f867c23bf6ab375268089cd4ac65a8829f207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 14:10:16.340775  124639 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0131 14:10:16.340948  124639 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18007-117277/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-755607"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
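The preload fetch logged above requests the tarball with a `?checksum=md5:…` query parameter and then saves and verifies that checksum on disk before trusting the cache. A minimal sketch of the verify step (the file contents and digest below are throwaway values, not the real preload tarball or its checksum):

```python
import hashlib
import tempfile

def md5_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file through md5 in chunks and compare against the expected digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Demo with a throwaway file standing in for the downloaded tarball.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"preload-bytes")
    path = tmp.name

expected = hashlib.md5(b"preload-bytes").hexdigest()
print(md5_matches(path, expected))  # True for the matching digest
print(md5_matches(path, "0" * 32))  # False for a wrong digest
```

Streaming in chunks matters here because the preload tarballs are hundreds of megabytes; hashing the whole file in one read would hold it all in memory.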

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-755607
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnlyKic (1.34s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-773457 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-773457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-773457
--- PASS: TestDownloadOnlyKic (1.34s)

TestBinaryMirror (0.78s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-953003 --alsologtostderr --binary-mirror http://127.0.0.1:43321 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-953003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-953003
--- PASS: TestBinaryMirror (0.78s)

TestOffline (62.84s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-027148 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-027148 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m0.547128906s)
helpers_test.go:175: Cleaning up "offline-containerd-027148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-027148
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-027148: (2.296652924s)
--- PASS: TestOffline (62.84s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214491
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-214491: exit status 85 (73.851136ms)

-- stdout --
	* Profile "addons-214491" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214491"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214491
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-214491: exit status 85 (68.917423ms)

-- stdout --
	* Profile "addons-214491" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214491"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (106.73s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-214491 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-214491 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m46.734043831s)
--- PASS: TestAddons/Setup (106.73s)

TestAddons/parallel/Registry (14.7s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 14.173436ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-d9s2q" [2871b25a-9352-469f-8a23-944ab9a8e387] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006023326s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q74f8" [df7b6b84-753f-4801-9602-60eb8519d1b6] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006152262s
addons_test.go:340: (dbg) Run:  kubectl --context addons-214491 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-214491 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-214491 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.764461717s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 ip
2024/01/31 14:12:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.70s)
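The registry probe above, `wget --spider -S`, fetches only the response headers to confirm the service answers. A rough Python equivalent of that kind of reachability check, pointed at a throwaway local server rather than the real `registry.kube-system.svc.cluster.local` endpoint:

```python
import http.server
import threading
import urllib.request

def head_status(url: str, timeout: float = 5.0) -> int:
    """Issue a HEAD request (like `wget --spider`) and return the HTTP status code."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

# Stand-in for the in-cluster registry service, bound to an ephemeral port.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

status = head_status(f"http://127.0.0.1:{server.server_port}/")
print(status)  # 200 when the endpoint is reachable
server.shutdown()
```

A HEAD request is enough for a liveness-style check like this: it proves the service resolves, accepts connections, and speaks HTTP, without pulling any registry content.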

TestAddons/parallel/Ingress (19.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-214491 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-214491 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-214491 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d19ba1e5-4935-422e-b200-537dffb6e5a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d19ba1e5-4935-422e-b200-537dffb6e5a3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004581301s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-214491 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-214491 addons disable ingress-dns --alsologtostderr -v=1: (1.065316445s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-214491 addons disable ingress --alsologtostderr -v=1: (7.93363792s)
--- PASS: TestAddons/parallel/Ingress (19.42s)
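The `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` step above works because the ingress controller routes on the HTTP Host header rather than the connection address. A small sketch of that name-based routing handshake, using a local echo server in place of the real ingress:

```python
import http.server
import threading
import urllib.request

class EchoHost(http.server.BaseHTTPRequestHandler):
    """Reply with whatever Host header the client sent (what an ingress routes on)."""
    def do_GET(self):
        body = self.headers.get("Host", "").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHost)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same idea as `curl http://127.0.0.1/ -H 'Host: nginx.example.com'`:
# connect by IP, but present the virtual-host name in the Host header.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    headers={"Host": "nginx.example.com"},
)
with urllib.request.urlopen(req) as resp:
    seen_host = resp.read().decode()
print(seen_host)  # nginx.example.com
server.shutdown()
```

This is why the test can hit `127.0.0.1` directly: the ingress rule for `nginx.example.com` matches on the header, so no DNS entry for the name is needed.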

TestAddons/parallel/InspektorGadget (11.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8j76m" [3faa022d-600c-4d90-ac8d-03a2e954bd13] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004925725s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-214491
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-214491: (6.233244986s)
--- PASS: TestAddons/parallel/InspektorGadget (11.24s)

TestAddons/parallel/MetricsServer (6.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 15.046704ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-5z5sm" [f00d08cb-8ef5-4fb1-9a0f-bc55ce02a581] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005979165s
addons_test.go:415: (dbg) Run:  kubectl --context addons-214491 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.71s)

TestAddons/parallel/HelmTiller (11.6s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 14.851039ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-fpdpc" [df50ab8f-7dde-4c15-ac50-a9d5d3ee508e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.006207123s
addons_test.go:473: (dbg) Run:  kubectl --context addons-214491 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-214491 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.813912432s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.60s)

TestAddons/parallel/CSI (111.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 15.280772ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-214491 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-214491 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-214491 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9226e647-deec-4562-af27-eb7a0734ff46] Pending
helpers_test.go:344: "task-pv-pod" [9226e647-deec-4562-af27-eb7a0734ff46] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9226e647-deec-4562-af27-eb7a0734ff46] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003919731s
addons_test.go:584: (dbg) Run:  kubectl --context addons-214491 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-214491 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-214491 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-214491 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-214491 delete pod task-pv-pod: (1.567969425s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-214491 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-214491 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-214491 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-214491 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [198772fe-cf80-4577-8ce0-f89349604d06] Pending
helpers_test.go:344: "task-pv-pod-restore" [198772fe-cf80-4577-8ce0-f89349604d06] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [198772fe-cf80-4577-8ce0-f89349604d06] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003873516s
addons_test.go:626: (dbg) Run:  kubectl --context addons-214491 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-214491 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-214491 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-214491 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.641335081s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (111.01s)

TestAddons/parallel/Headlamp (11.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-214491 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-214491 --alsologtostderr -v=1: (1.184055286s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-svshz" [a099bf15-85fb-4327-b26a-1963201bf66b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-svshz" [a099bf15-85fb-4327-b26a-1963201bf66b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003976025s
--- PASS: TestAddons/parallel/Headlamp (11.19s)

TestAddons/parallel/CloudSpanner (6.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-44b5j" [328da404-b8ba-4b62-ba2e-cadc54f20d63] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003995256s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-214491
--- PASS: TestAddons/parallel/CloudSpanner (6.52s)

TestAddons/parallel/LocalPath (52.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-214491 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-214491 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-214491 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6c549690-d1fd-44de-8f12-06862bd4372a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6c549690-d1fd-44de-8f12-06862bd4372a] Running
helpers_test.go:344: "test-local-path" [6c549690-d1fd-44de-8f12-06862bd4372a] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6c549690-d1fd-44de-8f12-06862bd4372a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004507094s
addons_test.go:891: (dbg) Run:  kubectl --context addons-214491 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 ssh "cat /opt/local-path-provisioner/pvc-b6ca553a-3812-43ca-8a9b-b1a71b4e3891_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-214491 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-214491 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-214491 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-214491 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.042734921s)
--- PASS: TestAddons/parallel/LocalPath (52.95s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-tjtlr" [5a9eb10b-f00e-4023-a996-5b955436c77d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004275534s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-214491 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-214491 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/StoppedEnableDisable (12.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-214491
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-214491: (11.877380772s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214491
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214491
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-214491
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

TestCertOptions (38.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-084476 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0131 14:37:06.918693  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-084476 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.667740781s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-084476 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-084476 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-084476 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-084476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-084476
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-084476: (1.96725656s)
--- PASS: TestCertOptions (38.29s)

TestCertExpiration (221.05s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-071089 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-071089 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (33.749317202s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-071089 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-071089 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.037546704s)
helpers_test.go:175: Cleaning up "cert-expiration-071089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-071089
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-071089: (2.263133892s)
--- PASS: TestCertExpiration (221.05s)

TestForceSystemdFlag (35.71s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-311497 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-311497 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.70679383s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-311497 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-311497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-311497
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-311497: (4.68864859s)
--- PASS: TestForceSystemdFlag (35.71s)

TestForceSystemdEnv (36.06s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-513300 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-513300 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.658131628s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-513300 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-513300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-513300
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-513300: (5.040247134s)
--- PASS: TestForceSystemdEnv (36.06s)

TestDockerEnvContainerd (37.54s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-539142 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-539142 --driver=docker  --container-runtime=containerd: (21.519218365s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-539142"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-539142": (1.11888241s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-YalpPBTkXbIC/agent.145615" SSH_AGENT_PID="145616" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-YalpPBTkXbIC/agent.145615" SSH_AGENT_PID="145616" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-YalpPBTkXbIC/agent.145615" SSH_AGENT_PID="145616" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.782053267s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-YalpPBTkXbIC/agent.145615" SSH_AGENT_PID="145616" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-539142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-539142
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-539142: (1.94314581s)
--- PASS: TestDockerEnvContainerd (37.54s)

TestKVMDriverInstallOrUpdate (3.22s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.22s)

TestErrorSpam/setup (24.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-490069 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-490069 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-490069 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-490069 --driver=docker  --container-runtime=containerd: (24.252667018s)
--- PASS: TestErrorSpam/setup (24.25s)

TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 stop: (1.230838361s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-490069 --log_dir /tmp/nospam-490069 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18007-117277/.minikube/files/etc/test/nested/copy/124059/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (53.64s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-259653 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-259653 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (53.63880326s)
--- PASS: TestFunctional/serial/StartWithProxy (53.64s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.11s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-259653 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-259653 --alsologtostderr -v=8: (5.109920552s)
functional_test.go:659: soft start took 5.110681028s for "functional-259653" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.11s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-259653 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 cache add registry.k8s.io/pause:3.1: (1.040975262s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 cache add registry.k8s.io/pause:3.3: (1.118899534s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 cache add registry.k8s.io/pause:latest: (1.002745996s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.66366ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 kubectl -- --context functional-259653 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-259653 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.35s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-259653 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0131 14:17:06.919296  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:06.925026  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:06.935277  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:06.955531  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:06.995794  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:07.076143  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:07.236569  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:07.557133  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:08.198050  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:09.478696  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:12.039801  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:17.160008  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:17:27.400654  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-259653 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.352064719s)
functional_test.go:757: restart took 40.352194143s for "functional-259653" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.35s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-259653 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 logs: (1.48695838s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 logs --file /tmp/TestFunctionalserialLogsFileCmd3157491921/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 logs --file /tmp/TestFunctionalserialLogsFileCmd3157491921/001/logs.txt: (1.536327424s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (4.81s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-259653 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-259653
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-259653: exit status 115 (374.579023ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31342 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-259653 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-259653 delete -f testdata/invalidsvc.yaml: (1.257010748s)
--- PASS: TestFunctional/serial/InvalidService (4.81s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 config get cpus: exit status 14 (69.181834ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 config get cpus: exit status 14 (91.330686ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-259653 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-259653 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 165657: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.30s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-259653 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-259653 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (217.268049ms)

                                                
                                                
-- stdout --
	* [functional-259653] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 14:18:04.975508  165166 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:18:04.975700  165166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:18:04.975713  165166 out.go:309] Setting ErrFile to fd 2...
	I0131 14:18:04.975721  165166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:18:04.975969  165166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:18:04.976772  165166 out.go:303] Setting JSON to false
	I0131 14:18:04.978473  165166 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":68437,"bootTime":1706642248,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 14:18:04.978572  165166 start.go:138] virtualization: kvm guest
	I0131 14:18:04.980778  165166 out.go:177] * [functional-259653] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 14:18:04.982603  165166 out.go:177]   - MINIKUBE_LOCATION=18007
	I0131 14:18:04.982631  165166 notify.go:220] Checking for updates...
	I0131 14:18:04.984130  165166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 14:18:04.985673  165166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:18:04.987262  165166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	I0131 14:18:04.988498  165166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 14:18:04.989868  165166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 14:18:04.991880  165166 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:18:04.992670  165166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 14:18:05.029629  165166 docker.go:122] docker version: linux-25.0.1:Docker Engine - Community
	I0131 14:18:05.029759  165166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:18:05.106444  165166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-01-31 14:18:05.094889367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:18:05.106597  165166 docker.go:295] overlay module found
	I0131 14:18:05.108902  165166 out.go:177] * Using the docker driver based on existing profile
	I0131 14:18:05.111611  165166 start.go:298] selected driver: docker
	I0131 14:18:05.111633  165166 start.go:902] validating driver "docker" against &{Name:functional-259653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-259653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 14:18:05.111756  165166 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 14:18:05.115806  165166 out.go:177] 
	W0131 14:18:05.118378  165166 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0131 14:18:05.119993  165166 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-259653 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-259653 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-259653 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (178.681463ms)

                                                
                                                
-- stdout --
	* [functional-259653] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 14:18:04.797371  165093 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:18:04.797555  165093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:18:04.797567  165093 out.go:309] Setting ErrFile to fd 2...
	I0131 14:18:04.797572  165093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:18:04.797939  165093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:18:04.799882  165093 out.go:303] Setting JSON to false
	I0131 14:18:04.801072  165093 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":68437,"bootTime":1706642248,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 14:18:04.801147  165093 start.go:138] virtualization: kvm guest
	I0131 14:18:04.803032  165093 out.go:177] * [functional-259653] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0131 14:18:04.805745  165093 out.go:177]   - MINIKUBE_LOCATION=18007
	I0131 14:18:04.805770  165093 notify.go:220] Checking for updates...
	I0131 14:18:04.807085  165093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 14:18:04.808566  165093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:18:04.809919  165093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	I0131 14:18:04.811177  165093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 14:18:04.812342  165093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 14:18:04.814087  165093 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:18:04.814743  165093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 14:18:04.838569  165093 docker.go:122] docker version: linux-25.0.1:Docker Engine - Community
	I0131 14:18:04.838738  165093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:18:04.894762  165093 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:59 SystemTime:2024-01-31 14:18:04.884221682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:18:04.894931  165093 docker.go:295] overlay module found
	I0131 14:18:04.896907  165093 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0131 14:18:04.898243  165093 start.go:298] selected driver: docker
	I0131 14:18:04.898260  165093 start.go:902] validating driver "docker" against &{Name:functional-259653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-259653 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 14:18:04.898382  165093 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 14:18:04.900675  165093 out.go:177] 
	W0131 14:18:04.902126  165093 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0131 14:18:04.903512  165093 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.97s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

TestFunctional/parallel/ServiceCmdConnect (13.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-259653 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-259653 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8dg9t" [499a8ecd-051b-4edd-b23a-af3b732bd349] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8dg9t" [499a8ecd-051b-4edd-b23a-af3b732bd349] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.004147595s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31533
functional_test.go:1671: http://192.168.49.2:31533: success! body:

Hostname: hello-node-connect-55497b8b78-8dg9t

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31533
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.74s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (36.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f381291d-9535-437a-93e4-4dd641c7ab69] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005192152s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-259653 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-259653 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-259653 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-259653 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [41ed104a-cf10-4649-89ff-0393c71e0faa] Pending
helpers_test.go:344: "sp-pod" [41ed104a-cf10-4649-89ff-0393c71e0faa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [41ed104a-cf10-4649-89ff-0393c71e0faa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004676724s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-259653 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-259653 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-259653 delete -f testdata/storage-provisioner/pod.yaml: (3.451277353s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-259653 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8c27c70e-ff39-4979-8398-fbd7f2484f13] Pending
helpers_test.go:344: "sp-pod" [8c27c70e-ff39-4979-8398-fbd7f2484f13] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8c27c70e-ff39-4979-8398-fbd7f2484f13] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003024305s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-259653 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.30s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh -n functional-259653 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cp functional-259653:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1937368698/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh -n functional-259653 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh -n functional-259653 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

TestFunctional/parallel/MySQL (20.45s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-259653 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-7njtw" [cbd4e6b6-ac14-42e4-9e04-bccc14e23639] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-7njtw" [cbd4e6b6-ac14-42e4-9e04-bccc14e23639] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004467734s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-259653 exec mysql-859648c796-7njtw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-259653 exec mysql-859648c796-7njtw -- mysql -ppassword -e "show databases;": exit status 1 (107.815747ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-259653 exec mysql-859648c796-7njtw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-259653 exec mysql-859648c796-7njtw -- mysql -ppassword -e "show databases;": exit status 1 (118.416192ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-259653 exec mysql-859648c796-7njtw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.45s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/124059/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo cat /etc/test/nested/copy/124059/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/124059.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo cat /etc/ssl/certs/124059.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/124059.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo cat /usr/share/ca-certificates/124059.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1240592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo cat /etc/ssl/certs/1240592.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1240592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo cat /usr/share/ca-certificates/1240592.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-259653 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh "sudo systemctl is-active docker": exit status 1 (329.774596ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh "sudo systemctl is-active crio": exit status 1 (353.415549ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-259653 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-259653 image ls --format short --alsologtostderr:
I0131 14:18:14.938784  168244 out.go:296] Setting OutFile to fd 1 ...
I0131 14:18:14.939222  168244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:14.939240  168244 out.go:309] Setting ErrFile to fd 2...
I0131 14:18:14.939247  168244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:14.939577  168244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
I0131 14:18:14.940512  168244 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:14.940702  168244 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:14.941433  168244 cli_runner.go:164] Run: docker container inspect functional-259653 --format={{.State.Status}}
I0131 14:18:14.962920  168244 ssh_runner.go:195] Run: systemctl --version
I0131 14:18:14.962996  168244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-259653
I0131 14:18:14.982406  168244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/functional-259653/id_rsa Username:docker}
I0131 14:18:15.082652  168244 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-259653 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                   | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| registry.k8s.io/pause                   | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                   | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | sha256:ead0a4 | 16.2MB |
| docker.io/library/mysql                 | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                 | alpine             | sha256:2b70e4 | 18MB   |
| registry.k8s.io/pause                   | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| docker.io/library/nginx                 | latest             | sha256:a87587 | 70.5MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/echoserver              | 1.8                | sha256:82e4c8 | 46.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-259653 image ls --format table --alsologtostderr:
I0131 14:18:15.198735  168340 out.go:296] Setting OutFile to fd 1 ...
I0131 14:18:15.198918  168340 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:15.198931  168340 out.go:309] Setting ErrFile to fd 2...
I0131 14:18:15.198938  168340 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:15.199135  168340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
I0131 14:18:15.199729  168340 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:15.199838  168340 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:15.200271  168340 cli_runner.go:164] Run: docker container inspect functional-259653 --format={{.State.Status}}
I0131 14:18:15.217894  168340 ssh_runner.go:195] Run: systemctl --version
I0131 14:18:15.217959  168340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-259653
I0131 14:18:15.235833  168340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/functional-259653/id_rsa Username:docker}
I0131 14:18:15.330545  168340 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-259653 image ls --format json --alsologtostderr:
[{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748","repoDigests":["docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17979980"},{"id":"sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"70520324"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-259653 image ls --format json --alsologtostderr:
I0131 14:18:14.973314  168262 out.go:296] Setting OutFile to fd 1 ...
I0131 14:18:14.973545  168262 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:14.973557  168262 out.go:309] Setting ErrFile to fd 2...
I0131 14:18:14.973562  168262 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:14.973790  168262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
I0131 14:18:14.974452  168262 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:14.974578  168262 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:14.975072  168262 cli_runner.go:164] Run: docker container inspect functional-259653 --format={{.State.Status}}
I0131 14:18:14.993373  168262 ssh_runner.go:195] Run: systemctl --version
I0131 14:18:14.993429  168262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-259653
I0131 14:18:15.010485  168262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/functional-259653/id_rsa Username:docker}
I0131 14:18:15.102491  168262 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-259653 image ls --format yaml --alsologtostderr:
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748
repoDigests:
- docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da
repoTags:
- docker.io/library/nginx:alpine
size: "17979980"
- id: sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "70520324"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-259653 image ls --format yaml --alsologtostderr:
I0131 14:18:15.220658  168350 out.go:296] Setting OutFile to fd 1 ...
I0131 14:18:15.220944  168350 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:15.220956  168350 out.go:309] Setting ErrFile to fd 2...
I0131 14:18:15.220960  168350 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:15.221201  168350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
I0131 14:18:15.221944  168350 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:15.222062  168350 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:15.222550  168350 cli_runner.go:164] Run: docker container inspect functional-259653 --format={{.State.Status}}
I0131 14:18:15.239821  168350 ssh_runner.go:195] Run: systemctl --version
I0131 14:18:15.239872  168350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-259653
I0131 14:18:15.258204  168350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/functional-259653/id_rsa Username:docker}
I0131 14:18:15.351018  168350 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh pgrep buildkitd: exit status 1 (279.930888ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image build -t localhost/my-image:functional-259653 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 image build -t localhost/my-image:functional-259653 testdata/build --alsologtostderr: (2.084249749s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-259653 image build -t localhost/my-image:functional-259653 testdata/build --alsologtostderr:
I0131 14:18:15.721851  168514 out.go:296] Setting OutFile to fd 1 ...
I0131 14:18:15.722190  168514 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:15.722203  168514 out.go:309] Setting ErrFile to fd 2...
I0131 14:18:15.722208  168514 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 14:18:15.722455  168514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
I0131 14:18:15.723277  168514 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:15.723898  168514 config.go:182] Loaded profile config "functional-259653": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0131 14:18:15.724418  168514 cli_runner.go:164] Run: docker container inspect functional-259653 --format={{.State.Status}}
I0131 14:18:15.744883  168514 ssh_runner.go:195] Run: systemctl --version
I0131 14:18:15.744950  168514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-259653
I0131 14:18:15.763198  168514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/functional-259653/id_rsa Username:docker}
I0131 14:18:15.854355  168514 build_images.go:151] Building image from path: /tmp/build.3471927794.tar
I0131 14:18:15.854427  168514 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0131 14:18:15.863983  168514 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3471927794.tar
I0131 14:18:15.867504  168514 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3471927794.tar: stat -c "%s %y" /var/lib/minikube/build/build.3471927794.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3471927794.tar': No such file or directory
I0131 14:18:15.867553  168514 ssh_runner.go:362] scp /tmp/build.3471927794.tar --> /var/lib/minikube/build/build.3471927794.tar (3072 bytes)
I0131 14:18:15.892890  168514 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3471927794
I0131 14:18:15.902776  168514 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3471927794 -xf /var/lib/minikube/build/build.3471927794.tar
I0131 14:18:15.913766  168514 containerd.go:379] Building image: /var/lib/minikube/build/build.3471927794
I0131 14:18:15.913867  168514 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3471927794 --local dockerfile=/var/lib/minikube/build/build.3471927794 --output type=image,name=localhost/my-image:functional-259653
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4ed1f4d4632a6175a48a8c85d6a8d79bdeea0475773840ba446b6b894a32e69e 0.0s done
#8 exporting config sha256:451bdabc4e4fd617f51148c5fadaa32468f26ed771b23bc1d4d39caab2e8a019 done
#8 naming to localhost/my-image:functional-259653 done
#8 DONE 0.1s
I0131 14:18:17.714777  168514 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3471927794 --local dockerfile=/var/lib/minikube/build/build.3471927794 --output type=image,name=localhost/my-image:functional-259653: (1.800873982s)
I0131 14:18:17.714841  168514 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3471927794
I0131 14:18:17.724784  168514 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3471927794.tar
I0131 14:18:17.733575  168514 build_images.go:207] Built localhost/my-image:functional-259653 from /tmp/build.3471927794.tar
I0131 14:18:17.733611  168514 build_images.go:123] succeeded building to: functional-259653
I0131 14:18:17.733617  168514 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image ls
2024/01/31 14:18:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.61s)

TestFunctional/parallel/ImageCommands/Setup (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.00090249s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-259653
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-259653 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-259653 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-259653 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 162475: os: process already finished
helpers_test.go:502: unable to terminate pid 162289: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-259653 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-259653 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-259653 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d031d6e5-f8dd-4d23-bb1c-d8b41729e59f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d031d6e5-f8dd-4d23-bb1c-d8b41729e59f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.003649861s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.27s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image rm gcr.io/google-containers/addon-resizer:functional-259653 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-259653 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.101.136 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-259653 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-259653 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-259653 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-psgbx" [990c057f-b223-440f-b287-89586b8a8f94] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-psgbx" [990c057f-b223-440f-b287-89586b8a8f94] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003882777s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/MountCmd/any-port (7.78s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdany-port4212796583/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1706710682567588953" to /tmp/TestFunctionalparallelMountCmdany-port4212796583/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1706710682567588953" to /tmp/TestFunctionalparallelMountCmdany-port4212796583/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1706710682567588953" to /tmp/TestFunctionalparallelMountCmdany-port4212796583/001/test-1706710682567588953
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.688486ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 31 14:18 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 31 14:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 31 14:18 test-1706710682567588953
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh cat /mount-9p/test-1706710682567588953
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-259653 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5f7e54f4-83a6-4435-95f3-3329db48f023] Pending
helpers_test.go:344: "busybox-mount" [5f7e54f4-83a6-4435-95f3-3329db48f023] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5f7e54f4-83a6-4435-95f3-3329db48f023] Running
helpers_test.go:344: "busybox-mount" [5f7e54f4-83a6-4435-95f3-3329db48f023] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5f7e54f4-83a6-4435-95f3-3329db48f023] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.024116617s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-259653 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdany-port4212796583/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "307.058387ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "71.67161ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "322.113445ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "73.199274ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ServiceCmd/List (1.75s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 service list: (1.747293994s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.75s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.76s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-259653 service list -o json: (1.758646405s)
functional_test.go:1490: Took "1.75879575s" to run "out/minikube-linux-amd64 -p functional-259653 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.76s)

TestFunctional/parallel/MountCmd/specific-port (2.18s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdspecific-port341538712/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (315.331434ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdspecific-port341538712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh "sudo umount -f /mount-9p": exit status 1 (369.672033ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-259653 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdspecific-port341538712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.18s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31556
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1012102365/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1012102365/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1012102365/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T" /mount1: exit status 1 (489.840951ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-259653 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1012102365/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1012102365/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-259653 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1012102365/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

TestFunctional/parallel/ServiceCmd/Format (0.78s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.78s)

TestFunctional/parallel/ServiceCmd/URL (0.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-259653 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31556
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.62s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-259653
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-259653
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-259653
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (69.16s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-378599 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0131 14:18:28.842237  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-378599 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m9.157389094s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (69.16s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.81s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-378599 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-378599 addons enable ingress --alsologtostderr -v=5: (8.813047826s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.81s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-378599 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (35.37s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-378599 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0131 14:19:50.762700  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-378599 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.671234483s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-378599 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-378599 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [42e4979d-f869-4b5b-9420-f992429d70a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [42e4979d-f869-4b5b-9420-f992429d70a4] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.003630049s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-378599 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-378599 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-378599 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-378599 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-378599 addons disable ingress-dns --alsologtostderr -v=1: (6.06817309s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-378599 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-378599 addons disable ingress --alsologtostderr -v=1: (7.453963639s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.37s)

TestJSONOutput/start/Command (46.93s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-711827 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-711827 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (46.92777046s)
--- PASS: TestJSONOutput/start/Command (46.93s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-711827 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-711827 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-711827 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-711827 --output=json --user=testUser: (5.776994339s)
--- PASS: TestJSONOutput/stop/Command (5.78s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-371987 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-371987 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.177242ms)
-- stdout --
	{"specversion":"1.0","id":"4bbaf7f6-57e3-4d4c-9c2e-e84424876af2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-371987] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99a7907d-59e4-465b-b3da-66a734899c23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18007"}}
	{"specversion":"1.0","id":"146b5435-4e36-4e4c-bb97-d76d2642909e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"52065ade-5207-48f7-a76f-080ba0d99730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig"}}
	{"specversion":"1.0","id":"185d0d8b-d31c-4e99-b85b-1871a5689430","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube"}}
	{"specversion":"1.0","id":"7d95e728-c5e5-4281-9875-89635475aaa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d881aa35-ce14-4ce5-87e1-1ba52ebe15d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"64cca265-b6c2-4e4d-8866-8383f878029b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-371987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-371987
--- PASS: TestErrorJSONOutput (0.25s)
TestKicCustomNetwork/create_custom_network (33.6s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-038157 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-038157 --network=: (31.401663209s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-038157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-038157
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-038157: (2.175961885s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.60s)
TestKicCustomNetwork/use_default_bridge_network (29.02s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-141379 --network=bridge
E0131 14:22:06.918401  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-141379 --network=bridge: (27.043600161s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-141379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-141379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-141379: (1.957426109s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.02s)
TestKicExistingNetwork (28.54s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-743036 --network=existing-network
E0131 14:22:34.604643  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:22:42.335143  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:42.340438  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:42.350604  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:42.370922  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:42.411330  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:42.491677  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:42.652185  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:42.972780  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:43.613755  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:44.894708  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:22:47.455836  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-743036 --network=existing-network: (26.399197172s)
helpers_test.go:175: Cleaning up "existing-network-743036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-743036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-743036: (1.988843587s)
--- PASS: TestKicExistingNetwork (28.54s)
TestKicCustomSubnet (27.8s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-853367 --subnet=192.168.60.0/24
E0131 14:22:52.576308  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:23:02.816998  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-853367 --subnet=192.168.60.0/24: (25.650734252s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-853367 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-853367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-853367
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-853367: (2.131387088s)
--- PASS: TestKicCustomSubnet (27.80s)
TestKicStaticIP (25.09s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-583284 --static-ip=192.168.200.200
E0131 14:23:23.297265  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-583284 --static-ip=192.168.200.200: (22.883490316s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-583284 ip
helpers_test.go:175: Cleaning up "static-ip-583284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-583284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-583284: (2.060987885s)
--- PASS: TestKicStaticIP (25.09s)
TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)
TestMinikubeProfile (52.17s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-318401 --driver=docker  --container-runtime=containerd
E0131 14:24:04.257659  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-318401 --driver=docker  --container-runtime=containerd: (24.412332402s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-320697 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-320697 --driver=docker  --container-runtime=containerd: (22.46632586s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-318401
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-320697
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-320697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-320697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-320697: (1.921514526s)
helpers_test.go:175: Cleaning up "first-318401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-318401
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-318401: (2.257771402s)
--- PASS: TestMinikubeProfile (52.17s)
TestMountStart/serial/StartWithMountFirst (7.83s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-713253 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0131 14:24:42.542619  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:24:42.547938  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:24:42.558224  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:24:42.578511  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:24:42.618806  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:24:42.699137  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-713253 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.832913158s)
E0131 14:24:42.860007  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:24:43.180627  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (7.83s)
TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-713253 ssh -- ls /minikube-host
E0131 14:24:43.820818  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
TestMountStart/serial/StartWithMountSecond (7.94s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-728637 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0131 14:24:45.101493  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:24:47.662650  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-728637 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.941446699s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.94s)
TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-728637 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)
TestMountStart/serial/DeleteFirst (1.63s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-713253 --alsologtostderr -v=5
E0131 14:24:52.783608  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-713253 --alsologtostderr -v=5: (1.628024022s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)
TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-728637 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)
TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-728637
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-728637: (1.196592428s)
--- PASS: TestMountStart/serial/Stop (1.20s)
TestMountStart/serial/RestartStopped (6.87s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-728637
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-728637: (5.867427227s)
--- PASS: TestMountStart/serial/RestartStopped (6.87s)
TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-728637 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
TestMultiNode/serial/FreshStart2Nodes (72.78s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555456 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0131 14:25:23.504555  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:25:26.178214  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:26:04.465791  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555456 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m12.289108607s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.78s)
TestMultiNode/serial/DeployApp2Nodes (3.53s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-555456 -- rollout status deployment/busybox: (1.838682914s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-ddhm9 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-g6qrk -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-ddhm9 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-g6qrk -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-ddhm9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-g6qrk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.53s)
TestMultiNode/serial/PingHostFrom2Pods (0.86s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-ddhm9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-ddhm9 -- sh -c "ping -c 1 127.0.0.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-g6qrk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-555456 -- exec busybox-5b5d89c9d6-g6qrk -- sh -c "ping -c 1 127.0.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
TestMultiNode/serial/AddNode (15.51s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-555456 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-555456 -v 3 --alsologtostderr: (14.89976316s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.51s)
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-555456 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
TestMultiNode/serial/ProfileList (0.3s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)
TestMultiNode/serial/CopyFile (9.92s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp testdata/cp-test.txt multinode-555456:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3885462144/001/cp-test_multinode-555456.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456:/home/docker/cp-test.txt multinode-555456-m02:/home/docker/cp-test_multinode-555456_multinode-555456-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m02 "sudo cat /home/docker/cp-test_multinode-555456_multinode-555456-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456:/home/docker/cp-test.txt multinode-555456-m03:/home/docker/cp-test_multinode-555456_multinode-555456-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m03 "sudo cat /home/docker/cp-test_multinode-555456_multinode-555456-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp testdata/cp-test.txt multinode-555456-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3885462144/001/cp-test_multinode-555456-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456-m02:/home/docker/cp-test.txt multinode-555456:/home/docker/cp-test_multinode-555456-m02_multinode-555456.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456 "sudo cat /home/docker/cp-test_multinode-555456-m02_multinode-555456.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456-m02:/home/docker/cp-test.txt multinode-555456-m03:/home/docker/cp-test_multinode-555456-m02_multinode-555456-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m03 "sudo cat /home/docker/cp-test_multinode-555456-m02_multinode-555456-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp testdata/cp-test.txt multinode-555456-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3885462144/001/cp-test_multinode-555456-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456-m03:/home/docker/cp-test.txt multinode-555456:/home/docker/cp-test_multinode-555456-m03_multinode-555456.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456 "sudo cat /home/docker/cp-test_multinode-555456-m03_multinode-555456.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 cp multinode-555456-m03:/home/docker/cp-test.txt multinode-555456-m02:/home/docker/cp-test_multinode-555456-m03_multinode-555456-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 ssh -n multinode-555456-m02 "sudo cat /home/docker/cp-test_multinode-555456-m03_multinode-555456-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.92s)
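The CopyFile steps above follow one pattern throughout: copy a file between nodes with `minikube cp`, then `minikube ssh ... "sudo cat ..."` it back and compare contents. A minimal sketch of that round trip, with plain temp files standing in for the minikube nodes (paths and commands here are illustrative, not the test's actual helpers):

```shell
# Copy-and-verify round trip, as exercised by the test above.
src=$(mktemp) && dst=$(mktemp)
echo "cp-test contents" > "$src"
cp "$src" "$dst"                     # stands in for `minikube cp <node>:...`
result=$(cat "$dst")                 # stands in for `minikube ssh -n <node> "sudo cat ..."`
[ "$result" = "cp-test contents" ] && echo "contents match"
```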

                                                
                                    
TestMultiNode/serial/StopNode (2.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-555456 node stop m03: (1.196302906s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-555456 status: exit status 7 (484.427426ms)

                                                
                                                
-- stdout --
	multinode-555456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-555456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-555456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr: exit status 7 (474.050071ms)

                                                
                                                
-- stdout --
	multinode-555456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-555456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-555456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 14:26:48.945493  225705 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:26:48.945668  225705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:26:48.945681  225705 out.go:309] Setting ErrFile to fd 2...
	I0131 14:26:48.945688  225705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:26:48.945899  225705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:26:48.946129  225705 out.go:303] Setting JSON to false
	I0131 14:26:48.946164  225705 mustload.go:65] Loading cluster: multinode-555456
	I0131 14:26:48.946249  225705 notify.go:220] Checking for updates...
	I0131 14:26:48.946624  225705 config.go:182] Loaded profile config "multinode-555456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:26:48.946641  225705 status.go:255] checking status of multinode-555456 ...
	I0131 14:26:48.947106  225705 cli_runner.go:164] Run: docker container inspect multinode-555456 --format={{.State.Status}}
	I0131 14:26:48.964122  225705 status.go:330] multinode-555456 host status = "Running" (err=<nil>)
	I0131 14:26:48.964145  225705 host.go:66] Checking if "multinode-555456" exists ...
	I0131 14:26:48.964378  225705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-555456
	I0131 14:26:48.980190  225705 host.go:66] Checking if "multinode-555456" exists ...
	I0131 14:26:48.980427  225705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0131 14:26:48.980480  225705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-555456
	I0131 14:26:48.996829  225705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/multinode-555456/id_rsa Username:docker}
	I0131 14:26:49.086147  225705 ssh_runner.go:195] Run: systemctl --version
	I0131 14:26:49.090464  225705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 14:26:49.100516  225705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:26:49.152098  225705 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:68 SystemTime:2024-01-31 14:26:49.143215734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:26:49.152703  225705 kubeconfig.go:92] found "multinode-555456" server: "https://192.168.58.2:8443"
	I0131 14:26:49.152736  225705 api_server.go:166] Checking apiserver status ...
	I0131 14:26:49.152777  225705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 14:26:49.163045  225705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1455/cgroup
	I0131 14:26:49.171567  225705 api_server.go:182] apiserver freezer: "9:freezer:/docker/209dfe5630183bdf64f7a0532d93f53e4a627f5d4189b8f23715ee5a9050410d/kubepods/burstable/podecac47e954ce29a9bfa5be8324bfa17b/2e56732bb4ca94944d1ce4b071433e083fe0c3e3a28168f235587e1831feecac"
	I0131 14:26:49.171621  225705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/209dfe5630183bdf64f7a0532d93f53e4a627f5d4189b8f23715ee5a9050410d/kubepods/burstable/podecac47e954ce29a9bfa5be8324bfa17b/2e56732bb4ca94944d1ce4b071433e083fe0c3e3a28168f235587e1831feecac/freezer.state
	I0131 14:26:49.179149  225705 api_server.go:204] freezer state: "THAWED"
	I0131 14:26:49.179173  225705 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0131 14:26:49.183294  225705 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0131 14:26:49.183313  225705 status.go:421] multinode-555456 apiserver status = Running (err=<nil>)
	I0131 14:26:49.183322  225705 status.go:257] multinode-555456 status: &{Name:multinode-555456 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0131 14:26:49.183336  225705 status.go:255] checking status of multinode-555456-m02 ...
	I0131 14:26:49.183567  225705 cli_runner.go:164] Run: docker container inspect multinode-555456-m02 --format={{.State.Status}}
	I0131 14:26:49.199742  225705 status.go:330] multinode-555456-m02 host status = "Running" (err=<nil>)
	I0131 14:26:49.199764  225705 host.go:66] Checking if "multinode-555456-m02" exists ...
	I0131 14:26:49.200008  225705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-555456-m02
	I0131 14:26:49.216995  225705 host.go:66] Checking if "multinode-555456-m02" exists ...
	I0131 14:26:49.217297  225705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0131 14:26:49.217345  225705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-555456-m02
	I0131 14:26:49.234163  225705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/18007-117277/.minikube/machines/multinode-555456-m02/id_rsa Username:docker}
	I0131 14:26:49.327464  225705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 14:26:49.339859  225705 status.go:257] multinode-555456-m02 status: &{Name:multinode-555456-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0131 14:26:49.339923  225705 status.go:255] checking status of multinode-555456-m03 ...
	I0131 14:26:49.340227  225705 cli_runner.go:164] Run: docker container inspect multinode-555456-m03 --format={{.State.Status}}
	I0131 14:26:49.358009  225705 status.go:330] multinode-555456-m03 host status = "Stopped" (err=<nil>)
	I0131 14:26:49.358040  225705 status.go:343] host is not running, skipping remaining checks
	I0131 14:26:49.358054  225705 status.go:257] multinode-555456-m03 status: &{Name:multinode-555456-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
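Note that `minikube status` encodes cluster state in its exit code; the exit status 7 seen above indicates at least one host or kubelet is stopped (m03 here), which is exactly what the test expects after `node stop`. A sketch of branching on that code, with a stub function standing in for the real binary so the snippet is self-contained:

```shell
# Stub: returns 7 because m03 is stopped in the log above.
minikube_status() { return 7; }
minikube_status
case $? in
  0) state="all nodes running" ;;
  7) state="degraded: some node stopped" ;;
  *) state="unexpected status" ;;
esac
echo "$state"
```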

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-555456 node start m03 --alsologtostderr: (10.410379624s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.15s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (115.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-555456
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-555456
E0131 14:27:06.919343  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-555456: (24.804802367s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555456 --wait=true -v=8 --alsologtostderr
E0131 14:27:26.386726  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:27:42.335556  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:28:10.018566  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555456 --wait=true -v=8 --alsologtostderr: (1m30.757489063s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-555456
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.69s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-555456 node delete m03: (4.127489644s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-555456 stop: (23.616237705s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-555456 status: exit status 7 (103.323ms)

                                                
                                                
-- stdout --
	multinode-555456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-555456-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr: exit status 7 (108.390466ms)

                                                
                                                
-- stdout --
	multinode-555456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-555456-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 14:29:24.728426  236269 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:29:24.728605  236269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:29:24.728616  236269 out.go:309] Setting ErrFile to fd 2...
	I0131 14:29:24.728620  236269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:29:24.728869  236269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:29:24.729076  236269 out.go:303] Setting JSON to false
	I0131 14:29:24.729117  236269 mustload.go:65] Loading cluster: multinode-555456
	I0131 14:29:24.729266  236269 notify.go:220] Checking for updates...
	I0131 14:29:24.729665  236269 config.go:182] Loaded profile config "multinode-555456": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0131 14:29:24.729685  236269 status.go:255] checking status of multinode-555456 ...
	I0131 14:29:24.730145  236269 cli_runner.go:164] Run: docker container inspect multinode-555456 --format={{.State.Status}}
	I0131 14:29:24.749916  236269 status.go:330] multinode-555456 host status = "Stopped" (err=<nil>)
	I0131 14:29:24.749949  236269 status.go:343] host is not running, skipping remaining checks
	I0131 14:29:24.749955  236269 status.go:257] multinode-555456 status: &{Name:multinode-555456 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0131 14:29:24.749996  236269 status.go:255] checking status of multinode-555456-m02 ...
	I0131 14:29:24.750329  236269 cli_runner.go:164] Run: docker container inspect multinode-555456-m02 --format={{.State.Status}}
	I0131 14:29:24.769866  236269 status.go:330] multinode-555456-m02 host status = "Stopped" (err=<nil>)
	I0131 14:29:24.769930  236269 status.go:343] host is not running, skipping remaining checks
	I0131 14:29:24.769941  236269 status.go:257] multinode-555456-m02 status: &{Name:multinode-555456-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (79.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555456 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0131 14:29:42.543041  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:30:10.227725  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555456 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.578186481s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-555456 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.22s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-555456
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555456-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-555456-m02 --driver=docker  --container-runtime=containerd: exit status 14 (85.148421ms)

                                                
                                                
-- stdout --
	* [multinode-555456-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-555456-m02' is duplicated with machine name 'multinode-555456-m02' in profile 'multinode-555456'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-555456-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-555456-m03 --driver=docker  --container-runtime=containerd: (24.82220092s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-555456
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-555456: exit status 80 (311.399761ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-555456
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-555456-m03 already exists in multinode-555456-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-555456-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-555456-m03: (1.92314965s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.21s)
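The MK_USAGE failure above comes from a uniqueness check: a new profile name may not collide with a machine name already used by an existing profile. A minimal sketch of that check (machine names taken from the log; the loop is illustrative, not minikube's actual implementation):

```shell
# Reject a profile name that matches an existing machine name.
existing_machines="multinode-555456 multinode-555456-m02 multinode-555456-m03"
new_profile="multinode-555456-m02"
verdict="unique"
for m in $existing_machines; do
  [ "$m" = "$new_profile" ] && verdict="duplicate"
done
echo "profile name is $verdict"
```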

                                                
                                    
TestPreload (116.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-577291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0131 14:32:06.918619  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-577291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m1.78674586s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-577291 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-577291 image pull gcr.io/k8s-minikube/busybox: (1.195164331s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-577291
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-577291: (5.73878293s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-577291 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0131 14:32:42.335648  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-577291 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (44.978331117s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-577291 image list
helpers_test.go:175: Cleaning up "test-preload-577291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-577291
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-577291: (2.390243418s)
--- PASS: TestPreload (116.33s)

                                                
                                    
TestScheduledStopUnix (97.42s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-051252 --memory=2048 --driver=docker  --container-runtime=containerd
E0131 14:33:29.965727  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-051252 --memory=2048 --driver=docker  --container-runtime=containerd: (21.291034671s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051252 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-051252 -n scheduled-stop-051252
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051252 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051252 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-051252 -n scheduled-stop-051252
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-051252
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-051252 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0131 14:34:42.542173  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-051252
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-051252: exit status 7 (84.535278ms)

                                                
                                                
-- stdout --
	scheduled-stop-051252
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-051252 -n scheduled-stop-051252
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-051252 -n scheduled-stop-051252: exit status 7 (95.751141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-051252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-051252
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-051252: (4.558317757s)
--- PASS: TestScheduledStopUnix (97.42s)
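The `--schedule` / `--cancel-scheduled` flow above backgrounds a timer process and records its PID so a later cancel can kill it before it fires (hence the "os: process already finished" messages when a new schedule replaces an old one). A minimal sketch of the pattern, with an `echo` standing in for the actual stop and a hypothetical /tmp path for the PID file:

```shell
# Schedule: background a timer that would perform the stop.
( sleep 5 && echo "would stop cluster now" ) &
echo $! > /tmp/scheduled-stop.pid
# Cancel: kill the timer before it fires.
kill "$(cat /tmp/scheduled-stop.pid)" && cancelled=yes
echo "schedule cancelled: $cancelled"
```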

                                                
                                    
TestInsufficientStorage (13.24s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-138690 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-138690 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.770311329s)

-- stdout --
	{"specversion":"1.0","id":"4d7f1028-a857-4e9a-a260-188033ea2e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-138690] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3079e1bb-1380-48f1-a1ee-e2fc63c44407","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18007"}}
	{"specversion":"1.0","id":"9f167c13-adab-4d0b-9a0b-34390acca03b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80e91797-24dc-428a-9da6-8350050d1cf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig"}}
	{"specversion":"1.0","id":"db779b89-128a-4a3d-9640-2ad93b22fe73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube"}}
	{"specversion":"1.0","id":"c2072b0f-b913-4d31-b27c-1bbceafedf1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"43cc73c6-563e-4549-abc6-57e87f356ef7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"833e0b0b-c7ef-4f6a-8d72-885b031baec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ab2519ea-19a8-4a79-b49a-658101b40354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1fa463c0-866b-48e3-93b7-d7b5647057fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"45a1f7b4-1659-453f-9c2a-726c4a56237d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d8a45218-9dac-4812-842f-2c9784cdab1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-138690 in cluster insufficient-storage-138690","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c10b03b-d8cc-41a9-a5b1-eaa86bd85914","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c471062a-cf42-4f41-98f6-9ab3424f2b6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3e7727c-4554-4bfa-9d2e-85a11d318aa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-138690 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-138690 --output=json --layout=cluster: exit status 7 (291.085644ms)

-- stdout --
	{"Name":"insufficient-storage-138690","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-138690","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0131 14:34:59.909952  257658 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-138690" does not appear in /home/jenkins/minikube-integration/18007-117277/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-138690 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-138690 --output=json --layout=cluster: exit status 7 (288.94376ms)

-- stdout --
	{"Name":"insufficient-storage-138690","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-138690","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0131 14:35:00.200855  257746 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-138690" does not appear in /home/jenkins/minikube-integration/18007-117277/kubeconfig
	E0131 14:35:00.210802  257746 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/insufficient-storage-138690/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-138690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-138690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-138690: (1.884290212s)
--- PASS: TestInsufficientStorage (13.24s)

TestRunningBinaryUpgrade (68.65s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2563657073 start -p running-upgrade-577822 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2563657073 start -p running-upgrade-577822 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (40.703551684s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-577822 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-577822 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.095345202s)
helpers_test.go:175: Cleaning up "running-upgrade-577822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-577822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-577822: (2.310173507s)
--- PASS: TestRunningBinaryUpgrade (68.65s)

TestKubernetesUpgrade (349.48s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0131 14:37:42.335242  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.589103702s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-214992
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-214992: (1.223922455s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-214992 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-214992 status --format={{.Host}}: exit status 7 (90.233831ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.614621248s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-214992 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (113.883136ms)

-- stdout --
	* [kubernetes-upgrade-214992] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-214992
	    minikube start -p kubernetes-upgrade-214992 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2149922 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-214992 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214992 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.275679024s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-214992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-214992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-214992: (2.491779501s)
--- PASS: TestKubernetesUpgrade (349.48s)

TestMissingContainerUpgrade (137.78s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1935175226 start -p missing-upgrade-096524 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1935175226 start -p missing-upgrade-096524 --memory=2200 --driver=docker  --container-runtime=containerd: (1m0.061545812s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-096524
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-096524: (10.444109403s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-096524
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-096524 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-096524 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.396198052s)
helpers_test.go:175: Cleaning up "missing-upgrade-096524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-096524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-096524: (2.268094594s)
--- PASS: TestMissingContainerUpgrade (137.78s)

TestStoppedBinaryUpgrade/Setup (0.54s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-053734 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-053734 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (93.736399ms)

-- stdout --
	* [NoKubernetes-053734] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (36.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-053734 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-053734 --driver=docker  --container-runtime=containerd: (35.721290685s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-053734 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.16s)

TestStoppedBinaryUpgrade/Upgrade (184.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2779180578 start -p stopped-upgrade-085959 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2779180578 start -p stopped-upgrade-085959 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m12.688759249s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2779180578 -p stopped-upgrade-085959 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2779180578 -p stopped-upgrade-085959 stop: (22.942186658s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-085959 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-085959 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m28.611956597s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (184.24s)

TestNoKubernetes/serial/StartWithStopK8s (16.1s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-053734 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-053734 --no-kubernetes --driver=docker  --container-runtime=containerd: (13.83765238s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-053734 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-053734 status -o json: exit status 2 (319.974825ms)

-- stdout --
	{"Name":"NoKubernetes-053734","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-053734
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-053734: (1.941661544s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.10s)

TestNoKubernetes/serial/Start (7.35s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-053734 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-053734 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.349567115s)
--- PASS: TestNoKubernetes/serial/Start (7.35s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-053734 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-053734 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.421615ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (8.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (8.12756687s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.99s)

TestNetworkPlugins/group/false (3.96s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-381612 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-381612 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (155.277339ms)

-- stdout --
	* [false-381612] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18007
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0131 14:36:08.962219  272633 out.go:296] Setting OutFile to fd 1 ...
	I0131 14:36:08.962455  272633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:36:08.962464  272633 out.go:309] Setting ErrFile to fd 2...
	I0131 14:36:08.962469  272633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 14:36:08.962672  272633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18007-117277/.minikube/bin
	I0131 14:36:08.963263  272633 out.go:303] Setting JSON to false
	I0131 14:36:08.964390  272633 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":69521,"bootTime":1706642248,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 14:36:08.964448  272633 start.go:138] virtualization: kvm guest
	I0131 14:36:08.966758  272633 out.go:177] * [false-381612] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 14:36:08.968128  272633 out.go:177]   - MINIKUBE_LOCATION=18007
	I0131 14:36:08.968136  272633 notify.go:220] Checking for updates...
	I0131 14:36:08.969588  272633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 14:36:08.970936  272633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18007-117277/kubeconfig
	I0131 14:36:08.972206  272633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18007-117277/.minikube
	I0131 14:36:08.973399  272633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 14:36:08.974655  272633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 14:36:08.976440  272633 config.go:182] Loaded profile config "NoKubernetes-053734": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0131 14:36:08.976577  272633 config.go:182] Loaded profile config "missing-upgrade-096524": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0131 14:36:08.976695  272633 config.go:182] Loaded profile config "stopped-upgrade-085959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0131 14:36:08.976815  272633 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 14:36:08.999258  272633 docker.go:122] docker version: linux-25.0.1:Docker Engine - Community
	I0131 14:36:08.999412  272633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0131 14:36:09.050156  272633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:83 SystemTime:2024-01-31 14:36:09.040732949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0131 14:36:09.050682  272633 docker.go:295] overlay module found
	I0131 14:36:09.052613  272633 out.go:177] * Using the docker driver based on user configuration
	I0131 14:36:09.053935  272633 start.go:298] selected driver: docker
	I0131 14:36:09.053951  272633 start.go:902] validating driver "docker" against <nil>
	I0131 14:36:09.053962  272633 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 14:36:09.056232  272633 out.go:177] 
	W0131 14:36:09.057507  272633 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0131 14:36:09.058752  272633 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-381612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-381612

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-381612

>>> host: /etc/nsswitch.conf:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /etc/hosts:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /etc/resolv.conf:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-381612

>>> host: crictl pods:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: crictl containers:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> k8s: describe netcat deployment:
error: context "false-381612" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-381612" does not exist

>>> k8s: netcat logs:
error: context "false-381612" does not exist

>>> k8s: describe coredns deployment:
error: context "false-381612" does not exist

>>> k8s: describe coredns pods:
error: context "false-381612" does not exist

>>> k8s: coredns logs:
error: context "false-381612" does not exist

>>> k8s: describe api server pod(s):
error: context "false-381612" does not exist

>>> k8s: api server logs:
error: context "false-381612" does not exist

>>> host: /etc/cni:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: ip a s:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: ip r s:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: iptables-save:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: iptables table nat:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> k8s: describe kube-proxy daemon set:
error: context "false-381612" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-381612" does not exist

>>> k8s: kube-proxy logs:
error: context "false-381612" does not exist

>>> host: kubelet daemon status:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: kubelet daemon config:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> k8s: kubelet logs:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18007-117277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 31 Jan 2024 14:36:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-096524
contexts:
- context:
    cluster: missing-upgrade-096524
    extensions:
    - extension:
        last-update: Wed, 31 Jan 2024 14:36:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-096524
  name: missing-upgrade-096524
current-context: missing-upgrade-096524
kind: Config
preferences: {}
users:
- name: missing-upgrade-096524
  user:
    client-certificate: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/missing-upgrade-096524/client.crt
    client-key: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/missing-upgrade-096524/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-381612

>>> host: docker daemon status:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: docker daemon config:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /etc/docker/daemon.json:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: docker system info:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: cri-docker daemon status:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: cri-docker daemon config:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: cri-dockerd version:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: containerd daemon status:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: containerd daemon config:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /etc/containerd/config.toml:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: containerd config dump:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: crio daemon status:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: crio daemon config:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: /etc/crio:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

>>> host: crio config:
* Profile "false-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381612"

----------------------- debugLogs end: false-381612 [took: 3.559427494s] --------------------------------
helpers_test.go:175: Cleaning up "false-381612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-381612
--- PASS: TestNetworkPlugins/group/false (3.96s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-053734
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-053734: (1.226830702s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (6.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-053734 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-053734 --driver=docker  --container-runtime=containerd: (6.337158655s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-053734 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-053734 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.551298ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-085959
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestPause/serial/Start (56.62s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-200538 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-200538 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.618915209s)
--- PASS: TestPause/serial/Start (56.62s)

TestNetworkPlugins/group/auto/Start (52.57s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0131 14:39:05.379764  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.573298403s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.57s)

TestPause/serial/SecondStartNoReconfiguration (4.94s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-200538 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-200538 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4.923793955s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (4.94s)

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-200538 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-200538 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-200538 --output=json --layout=cluster: exit status 2 (326.693368ms)

-- stdout --
	{"Name":"pause-200538","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-200538","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-200538 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-200538 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (2.56s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-200538 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-200538 --alsologtostderr -v=5: (2.559618721s)
--- PASS: TestPause/serial/DeletePaused (2.56s)

TestPause/serial/VerifyDeletedResources (13.73s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.679707506s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-200538
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-200538: exit status 1 (16.94975ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-200538: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-381612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-381612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b5d5n" [0f317cdc-ddef-4d26-9ce9-988a4fb1792d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b5d5n" [0f317cdc-ddef-4d26-9ce9-988a4fb1792d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004185198s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-381612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/Start (51.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0131 14:39:42.542550  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.198831209s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.20s)

TestNetworkPlugins/group/calico/Start (70.53s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.533504056s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.53s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rzpb2" [28322c74-5d11-4980-9855-43d0c94bcec0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004640304s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-381612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-381612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5s2jp" [44e703cc-0ead-4ff2-b8a5-8ad08b63785d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5s2jp" [44e703cc-0ead-4ff2-b8a5-8ad08b63785d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005012068s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.22s)

TestNetworkPlugins/group/custom-flannel/Start (52.03s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.033613376s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.03s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-381612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (38.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (38.230240207s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.23s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lnxx4" [e1db1a2a-5742-4f9e-b113-3a46fba047b3] Running
E0131 14:41:05.588706  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005164664s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-381612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-381612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c4wn9" [626fb670-6259-404a-b5ea-418020cd0a5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c4wn9" [626fb670-6259-404a-b5ea-418020cd0a5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.006147735s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-381612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-381612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-381612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rkvz2" [10bb73c9-c607-4b87-91a7-713899fa0466] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rkvz2" [10bb73c9-c607-4b87-91a7-713899fa0466] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.006053015s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-381612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-381612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-381612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cnzgg" [fa700636-411a-431b-8bc1-4c888ff63e92] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cnzgg" [fa700636-411a-431b-8bc1-4c888ff63e92] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.005228629s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (56.34s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.344677188s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-381612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (41.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-381612 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (41.560507925s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.56s)

TestStartStop/group/old-k8s-version/serial/FirstStart (121.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-501118 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-501118 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m1.887064411s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (121.89s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qnzfx" [0e8ea260-42c8-4b9a-8f19-2088460045fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004325212s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-381612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-381612 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-381612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkw4h" [458473cf-d069-4baf-a322-46dfd5572535] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkw4h" [458473cf-d069-4baf-a322-46dfd5572535] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00450635s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-381612 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4pwhz" [985f7942-6135-4d7e-b2eb-ffc05e9b9c26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0131 14:42:42.335667  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-4pwhz" [985f7942-6135-4d7e-b2eb-ffc05e9b9c26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004800601s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-381612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-381612 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
E0131 14:47:35.103862  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:35.109136  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:35.119449  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:35.139733  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:35.180050  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:35.260396  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:35.420806  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:35.741336  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:36.382240  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:37.662858  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:40.223092  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:41.866487  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:41.871774  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:41.882041  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:41.902311  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:41.942602  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:42.023064  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:42.183716  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:42.336003  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/functional-259653/client.crt: no such file or directory
E0131 14:47:42.504328  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:43.144509  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:44.425385  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:45.344297  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:47:46.985598  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:51.839066  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:47:52.106705  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:47:55.585108  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:48:00.297643  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:48:02.347565  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:48:07.330443  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:48:16.065316  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:48:22.828574  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:48:45.811197  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:48:57.026395  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
E0131 14:49:03.789756  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
E0131 14:49:10.871568  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:10.876841  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:10.887125  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:10.907371  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:10.947726  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:11.028061  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:11.188429  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:11.509010  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:12.149222  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:13.430312  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:13.759934  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:49:15.990695  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:21.110979  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:21.575828  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:49:22.218440  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:49:31.351362  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory
E0131 14:49:42.542874  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:49:49.259910  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:49:51.832061  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/old-k8s-version-501118/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-381612 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (64.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-870571 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-870571 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m4.943445737s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-875537 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-875537 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (54.681642502s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-522968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-522968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (53.33118076s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-875537 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [612fcbe9-56a4-4a7e-a9a3-49528322ba55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [612fcbe9-56a4-4a7e-a9a3-49528322ba55] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004246027s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-875537 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-501118 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2df1cbf-d855-4489-940d-976945e176ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2df1cbf-d855-4489-940d-976945e176ee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003793031s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-501118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-875537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-875537 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-875537 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-875537 --alsologtostderr -v=3: (11.927343807s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-870571 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ea25691-1639-4898-84f7-85fe4d09d5cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ea25691-1639-4898-84f7-85fe4d09d5cb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003635955s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-870571 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-522968 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d1db478-2d6e-4863-aea3-9f9eff668de3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4d1db478-2d6e-4863-aea3-9f9eff668de3] Running
E0131 14:44:21.575875  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:21.581191  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:21.591428  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:21.611661  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:21.652459  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:21.732755  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:21.893236  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:22.213453  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:22.854397  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:24.135086  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003289477s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-522968 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-501118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-501118 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-501118 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-501118 --alsologtostderr -v=3: (11.886321681s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-522968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-522968 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-870571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-870571 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (15.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-522968 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-522968 --alsologtostderr -v=3: (15.682583211s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (15.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (15.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-870571 --alsologtostderr -v=3
E0131 14:44:26.696211  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-870571 --alsologtostderr -v=3: (15.02429256s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (15.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-875537 -n embed-certs-875537
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-875537 -n embed-certs-875537: exit status 7 (80.153064ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-875537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (340.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-875537 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-875537 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m39.761924636s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-875537 -n embed-certs-875537
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (340.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-501118 -n old-k8s-version-501118
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-501118 -n old-k8s-version-501118: exit status 7 (104.681413ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-501118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (64.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-501118 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0131 14:44:31.816965  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-501118 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m3.939405632s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-501118 -n old-k8s-version-501118
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (64.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-870571 -n no-preload-870571
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-870571 -n no-preload-870571: exit status 7 (99.830334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-870571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968: exit status 7 (85.994372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-522968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (336.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-870571 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-870571 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m36.420511101s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-870571 -n no-preload-870571
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-522968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0131 14:44:42.057176  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:44:42.542955  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/ingress-addon-legacy-378599/client.crt: no such file or directory
E0131 14:45:02.537732  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:45:23.487588  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:23.492874  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:23.503163  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:23.523429  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:23.563697  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:23.644050  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:23.804272  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:24.125071  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:24.765489  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:26.046039  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:28.606774  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:45:33.727679  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-522968 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m29.100780949s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.43s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (34.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-896jd" [3c0aa191-aaf7-4c7d-856e-86a1686a606d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0131 14:45:43.498608  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:45:43.968112  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:46:01.965834  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:01.971071  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:01.981294  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:02.001971  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:02.042276  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:02.122796  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:02.283775  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:02.604344  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-896jd" [3c0aa191-aaf7-4c7d-856e-86a1686a606d] Running
E0131 14:46:03.245416  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:04.448721  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:46:04.526298  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:07.087006  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 34.003852486s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (34.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-896jd" [3c0aa191-aaf7-4c7d-856e-86a1686a606d] Running
E0131 14:46:12.207531  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003159644s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-501118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-501118 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-501118 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-501118 -n old-k8s-version-501118
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-501118 -n old-k8s-version-501118: exit status 2 (305.142268ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-501118 -n old-k8s-version-501118
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-501118 -n old-k8s-version-501118: exit status 2 (310.138036ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-501118 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-501118 -n old-k8s-version-501118
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-501118 -n old-k8s-version-501118
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.75s)

TestStartStop/group/newest-cni/serial/FirstStart (32.16s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-273089 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0131 14:46:22.448467  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:29.915140  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:29.920412  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:29.930664  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:29.950957  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:29.991306  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:30.071654  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:30.232085  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:30.552675  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:31.193759  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:32.474526  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:35.035712  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:38.372580  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:38.377881  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:38.388151  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:38.408464  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:38.449594  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:38.529922  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:38.690682  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:39.011816  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:39.652864  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:40.156578  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:46:40.933502  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:42.929507  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/calico-381612/client.crt: no such file or directory
E0131 14:46:43.495462  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:45.409452  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
E0131 14:46:48.615939  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:46:50.397683  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-273089 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (32.159056743s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (32.16s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-273089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-273089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069174478s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-273089 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-273089 --alsologtostderr -v=3: (1.200390725s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-273089 -n newest-cni-273089
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-273089 -n newest-cni-273089: exit status 7 (80.270318ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-273089 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (25.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-273089 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0131 14:46:58.857088  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
E0131 14:47:05.419002  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/auto-381612/client.crt: no such file or directory
E0131 14:47:06.918665  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
E0131 14:47:10.878664  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/custom-flannel-381612/client.crt: no such file or directory
E0131 14:47:19.337344  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/enable-default-cni-381612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-273089 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (25.322589673s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-273089 -n newest-cni-273089
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-273089 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-273089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-273089 -n newest-cni-273089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-273089 -n newest-cni-273089: exit status 2 (307.077778ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-273089 -n newest-cni-273089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-273089 -n newest-cni-273089: exit status 2 (307.990134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-273089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-273089 -n newest-cni-273089
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-273089 -n newest-cni-273089
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tbg9x" [b80a282b-5068-4850-8f15-4b4ba609cc87] Running
E0131 14:50:09.966729  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/addons-214491/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004725322s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-m4dsw" [0d8ff992-f63e-4e85-b967-4faeeabc910d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-m4dsw" [0d8ff992-f63e-4e85-b967-4faeeabc910d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004114706s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tbg9x" [b80a282b-5068-4850-8f15-4b4ba609cc87] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004525815s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-875537 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rvwrf" [9e34dad5-068c-4cd7-9634-ebcada11c337] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0131 14:50:18.947176  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/flannel-381612/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rvwrf" [9e34dad5-068c-4cd7-9634-ebcada11c337] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004562593s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-m4dsw" [0d8ff992-f63e-4e85-b967-4faeeabc910d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004217703s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-522968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-875537 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-875537 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-875537 -n embed-certs-875537
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-875537 -n embed-certs-875537: exit status 2 (308.718575ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-875537 -n embed-certs-875537
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-875537 -n embed-certs-875537: exit status 2 (323.873935ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-875537 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-875537 -n embed-certs-875537
E0131 14:50:23.487243  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/kindnet-381612/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-875537 -n embed-certs-875537
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-522968 image list --format=json
E0131 14:50:25.710717  124059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/bridge-381612/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-522968 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968: exit status 2 (318.350072ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968: exit status 2 (305.53921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-522968 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-522968 -n default-k8s-diff-port-522968
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rvwrf" [9e34dad5-068c-4cd7-9634-ebcada11c337] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003503528s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-870571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-870571 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-870571 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-870571 -n no-preload-870571
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-870571 -n no-preload-870571: exit status 2 (297.35072ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-870571 -n no-preload-870571
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-870571 -n no-preload-870571: exit status 2 (298.640303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-870571 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-870571 -n no-preload-870571
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-870571 -n no-preload-870571
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.68s)

                                                
                                    

Test skip (26/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-381612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-381612

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-381612" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18007-117277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 31 Jan 2024 14:36:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-096524
contexts:
- context:
    cluster: missing-upgrade-096524
    extensions:
    - extension:
        last-update: Wed, 31 Jan 2024 14:36:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-096524
  name: missing-upgrade-096524
current-context: missing-upgrade-096524
kind: Config
preferences: {}
users:
- name: missing-upgrade-096524
  user:
    client-certificate: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/missing-upgrade-096524/client.crt
    client-key: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/missing-upgrade-096524/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-381612

>>> host: docker daemon status:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: docker daemon config:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: docker system info:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: cri-docker daemon status:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: cri-docker daemon config:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: cri-dockerd version:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: containerd daemon status:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: containerd daemon config:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: containerd config dump:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: crio daemon status:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: crio daemon config:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: /etc/crio:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

>>> host: crio config:
* Profile "kubenet-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381612"

----------------------- debugLogs end: kubenet-381612 [took: 3.78610031s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-381612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-381612
--- SKIP: TestNetworkPlugins/group/kubenet (3.96s)

TestNetworkPlugins/group/cilium (4.4s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-381612 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-381612

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-381612

>>> host: /etc/nsswitch.conf:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /etc/hosts:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /etc/resolv.conf:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-381612

>>> host: crictl pods:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: crictl containers:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> k8s: describe netcat deployment:
error: context "cilium-381612" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-381612" does not exist

>>> k8s: netcat logs:
error: context "cilium-381612" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-381612" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-381612" does not exist

>>> k8s: coredns logs:
error: context "cilium-381612" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-381612" does not exist

>>> k8s: api server logs:
error: context "cilium-381612" does not exist

>>> host: /etc/cni:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: ip a s:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: ip r s:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: iptables-save:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: iptables table nat:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-381612

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-381612

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-381612" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-381612" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-381612

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-381612

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-381612" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-381612" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-381612" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-381612" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-381612" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: kubelet daemon config:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> k8s: kubelet logs:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18007-117277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 31 Jan 2024 14:36:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-096524
contexts:
- context:
    cluster: missing-upgrade-096524
    extensions:
    - extension:
        last-update: Wed, 31 Jan 2024 14:36:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-096524
  name: missing-upgrade-096524
current-context: missing-upgrade-096524
kind: Config
preferences: {}
users:
- name: missing-upgrade-096524
  user:
    client-certificate: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/missing-upgrade-096524/client.crt
    client-key: /home/jenkins/minikube-integration/18007-117277/.minikube/profiles/missing-upgrade-096524/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-381612

>>> host: docker daemon status:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: docker daemon config:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: docker system info:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: cri-docker daemon status:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: cri-docker daemon config:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: cri-dockerd version:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: containerd daemon status:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: containerd daemon config:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: containerd config dump:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: crio daemon status:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: crio daemon config:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: /etc/crio:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

>>> host: crio config:
* Profile "cilium-381612" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381612"

----------------------- debugLogs end: cilium-381612 [took: 4.220139377s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-381612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-381612
--- SKIP: TestNetworkPlugins/group/cilium (4.40s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-380033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-380033
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
