Test Report: Docker_Linux_containerd 17957

89df817c127b40a78141e8021123a5a55115ceb7:2024-01-15:32713
Tests failed (1/320)

Order  Failed test                   Duration
45     TestAddons/parallel/Headlamp  2.92s
TestAddons/parallel/Headlamp (2.92s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-391328 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-391328 --alsologtostderr -v=1: exit status 11 (325.028039ms)

-- stdout --
	
-- /stdout --
** stderr ** 
	I0115 11:39:33.926383  128808 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:39:33.926499  128808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:39:33.926510  128808 out.go:309] Setting ErrFile to fd 2...
	I0115 11:39:33.926517  128808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:39:33.926814  128808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:39:33.927085  128808 mustload.go:65] Loading cluster: addons-391328
	I0115 11:39:33.927436  128808 config.go:182] Loaded profile config "addons-391328": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:39:33.927458  128808 addons.go:597] checking whether the cluster is paused
	I0115 11:39:33.927576  128808 config.go:182] Loaded profile config "addons-391328": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:39:33.927589  128808 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:39:33.927986  128808 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:39:33.944189  128808 ssh_runner.go:195] Run: systemctl --version
	I0115 11:39:33.944251  128808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:39:33.965460  128808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:39:34.057083  128808 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 11:39:34.057164  128808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 11:39:34.099761  128808 cri.go:89] found id: "eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe"
	I0115 11:39:34.099796  128808 cri.go:89] found id: "190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c"
	I0115 11:39:34.099801  128808 cri.go:89] found id: "e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5"
	I0115 11:39:34.099807  128808 cri.go:89] found id: "f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820"
	I0115 11:39:34.099811  128808 cri.go:89] found id: "3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39"
	I0115 11:39:34.099823  128808 cri.go:89] found id: "ab46f68fd275d4eacefb74bcf41cc62aa811de43f45a5aea6c4b1a61c5bff9d8"
	I0115 11:39:34.099826  128808 cri.go:89] found id: "f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15"
	I0115 11:39:34.099830  128808 cri.go:89] found id: "fc5324adde3f6deb5d1633d4381d192e337a7c7b6195415036cf3063692fc645"
	I0115 11:39:34.099833  128808 cri.go:89] found id: "2862e7247c61c477bbe65b2e4ea9ef24a074e80dcafbf3a51f7673212f6b4028"
	I0115 11:39:34.099847  128808 cri.go:89] found id: "f8f6e870037b0b0ec246be788b905f9e2556e075ea54ae4e433a42f3198cc788"
	I0115 11:39:34.099857  128808 cri.go:89] found id: "63bee67192da9fc1eccaff4128911d5bf837aa554fdd102f83eda2d27a2d8a95"
	I0115 11:39:34.099863  128808 cri.go:89] found id: "6848d7d748f0c8480b4927b54b1d78a7fd06347fc17e71cd9e3d7f4ad6ad4f4e"
	I0115 11:39:34.099873  128808 cri.go:89] found id: "526de9ff2cdbcb796430e504215045647ef594c29269fa50e4a0999ee8c52123"
	I0115 11:39:34.099881  128808 cri.go:89] found id: "295e88588854d8840001ba1e7e81454f885b7b2b78cefaa58259d88b1e9e7d76"
	I0115 11:39:34.099890  128808 cri.go:89] found id: "885ccee66b992944d9ea4f29cf585b0c093362873cf48e1b783ad88a8e6a5dcd"
	I0115 11:39:34.099896  128808 cri.go:89] found id: "1b78d94c40a3d08f597620f25b2706c3c5ad2db2b11d1e2bf4ce92c09b23ddeb"
	I0115 11:39:34.099901  128808 cri.go:89] found id: "ac5a448b73cd722ce5270ef745c361955f6e3a40df5042731527b7a446ecee39"
	I0115 11:39:34.099909  128808 cri.go:89] found id: "0818e33a39ae7d33bbaacb8b6fa299258ad68586e3c0afffed6b84391670d256"
	I0115 11:39:34.099918  128808 cri.go:89] found id: ""
	I0115 11:39:34.099978  128808 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0115 11:39:34.167628  128808 out.go:177] 
	W0115 11:39:34.169244  128808 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T11:39:34Z" level=error msg="stat /run/containerd/runc/k8s.io/efb9a270794672fd4d580bea158675841f39ed66d873084a28b91d2d2ffb433a: no such file or directory"
	
	W0115 11:39:34.169278  128808 out.go:239] * 
	W0115 11:39:34.173691  128808 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 11:39:34.175531  128808 out.go:177] 

** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-391328 --alsologtostderr -v=1": exit status 11
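The failure mode captured above is that minikube's paused-cluster check first enumerates kube-system container IDs via crictl, then shells out to `sudo runc --root /run/containerd/runc/k8s.io list -f json`; the whole addon enable aborts with MK_ADDON_ENABLE_PAUSED when that second command exits non-zero (here, a container's state directory vanished between the two listings). A minimal sketch of that check in pure Python, with sample data standing in for real crictl/runc output (the IDs and JSON below are illustrative, not from the cluster above):

```python
import json

def paused_ids(runc_list_json: str, candidate_ids: list[str]) -> list[str]:
    """Return the candidate container IDs that runc reports as paused.

    Mirrors the shape of the check in the log: crictl supplies candidate
    IDs, `runc list -f json` supplies per-container status. In minikube a
    non-zero exit from runc (e.g. a container deleted between the two
    listings, as in the stat error above) aborts before this point.
    """
    containers = json.loads(runc_list_json)  # output of: runc list -f json
    status = {c["id"]: c["status"] for c in containers}
    return [cid for cid in candidate_ids if status.get(cid) == "paused"]

# Sample data standing in for real crictl/runc output.
sample = json.dumps([
    {"id": "aaa", "status": "running"},
    {"id": "bbb", "status": "paused"},
])
print(paused_ids(sample, ["aaa", "bbb", "ccc"]))  # ['bbb']
```

The race is inherent to listing in two steps: any container that exits between the crictl listing and the runc call makes the second command fail, which is why the test tripped even though no container was actually paused.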
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-391328
helpers_test.go:235: (dbg) docker inspect addons-391328:

-- stdout --
	[
	    {
	        "Id": "71acac0eada65202fe57777264065836e810c9d0d6ceb78654cc9798d9dc7092",
	        "Created": "2024-01-15T11:37:18.892628449Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 115292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T11:37:19.186552431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/71acac0eada65202fe57777264065836e810c9d0d6ceb78654cc9798d9dc7092/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71acac0eada65202fe57777264065836e810c9d0d6ceb78654cc9798d9dc7092/hostname",
	        "HostsPath": "/var/lib/docker/containers/71acac0eada65202fe57777264065836e810c9d0d6ceb78654cc9798d9dc7092/hosts",
	        "LogPath": "/var/lib/docker/containers/71acac0eada65202fe57777264065836e810c9d0d6ceb78654cc9798d9dc7092/71acac0eada65202fe57777264065836e810c9d0d6ceb78654cc9798d9dc7092-json.log",
	        "Name": "/addons-391328",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-391328:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-391328",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e07a34baeb1cbe516d67a71afaff528e67f5778efad7a5825fa52170f1db4990-init/diff:/var/lib/docker/overlay2/bbf0039768421d87c488cd6c0112bf6d8c12cfe622242924e63ea6d2ac4b768f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e07a34baeb1cbe516d67a71afaff528e67f5778efad7a5825fa52170f1db4990/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e07a34baeb1cbe516d67a71afaff528e67f5778efad7a5825fa52170f1db4990/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e07a34baeb1cbe516d67a71afaff528e67f5778efad7a5825fa52170f1db4990/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-391328",
	                "Source": "/var/lib/docker/volumes/addons-391328/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-391328",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-391328",
	                "name.minikube.sigs.k8s.io": "addons-391328",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09586223faee6c5f1bb35840e0eaa8ba9988d3e6efbe0211e7328f76d5acd228",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/09586223faee",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-391328": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "71acac0eada6",
	                        "addons-391328"
	                    ],
	                    "NetworkID": "c18d018ac507251fffd7ac9f75b11f9755d75395b377dd9b4f7d2251704713f8",
	                    "EndpointID": "74bb439933545bf79e95f80b54289481b3df590bf5862c8a69044da2886a3edc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-391328 -n addons-391328
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-391328 logs -n 25: (1.579411795s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-128970                                                                     | download-only-128970   | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| delete  | -p download-only-880748                                                                     | download-only-880748   | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| delete  | -p download-only-632060                                                                     | download-only-632060   | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| start   | --download-only -p                                                                          | download-docker-788749 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | download-docker-788749                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-788749                                                                   | download-docker-788749 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-483395   | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | binary-mirror-483395                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33409                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-483395                                                                     | binary-mirror-483395   | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| addons  | enable dashboard -p                                                                         | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | addons-391328                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | addons-391328                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-391328 --wait=true                                                                | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-391328 addons disable                                                                | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-391328 addons                                                                        | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-391328 ip                                                                            | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	| addons  | addons-391328 addons disable                                                                | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | addons-391328                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-391328 ssh cat                                                                       | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | /opt/local-path-provisioner/pvc-495a1e45-6730-4456-9b4a-84b12188efc5_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-391328 addons disable                                                                | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | -p addons-391328                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-391328 ssh curl -s                                                                   | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-391328 ip                                                                            | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	| addons  | addons-391328 addons disable                                                                | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-391328 addons disable                                                                | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC |                     |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC | 15 Jan 24 11:39 UTC |
	|         | addons-391328                                                                               |                        |         |         |                     |                     |
	| addons  | addons-391328 addons                                                                        | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-391328          | jenkins | v1.32.0 | 15 Jan 24 11:39 UTC |                     |
	|         | -p addons-391328                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:36:57
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:36:57.522651  114677 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:36:57.522908  114677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:57.522919  114677 out.go:309] Setting ErrFile to fd 2...
	I0115 11:36:57.522924  114677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:57.523092  114677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:36:57.523691  114677 out.go:303] Setting JSON to false
	I0115 11:36:57.524564  114677 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8367,"bootTime":1705310251,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:36:57.524625  114677 start.go:138] virtualization: kvm guest
	I0115 11:36:57.526944  114677 out.go:177] * [addons-391328] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:36:57.528465  114677 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 11:36:57.529805  114677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:36:57.528532  114677 notify.go:220] Checking for updates...
	I0115 11:36:57.532450  114677 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 11:36:57.534059  114677 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	I0115 11:36:57.535548  114677 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 11:36:57.537033  114677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:36:57.538709  114677 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:36:57.560518  114677 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:36:57.560631  114677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:57.610016  114677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 11:36:57.601611108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:57.610132  114677 docker.go:295] overlay module found
	I0115 11:36:57.612275  114677 out.go:177] * Using the docker driver based on user configuration
	I0115 11:36:57.613832  114677 start.go:298] selected driver: docker
	I0115 11:36:57.613843  114677 start.go:902] validating driver "docker" against <nil>
	I0115 11:36:57.613854  114677 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:36:57.614604  114677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:57.663761  114677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 11:36:57.655908043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:57.663915  114677 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:36:57.664115  114677 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 11:36:57.666253  114677 out.go:177] * Using Docker driver with root privileges
	I0115 11:36:57.667709  114677 cni.go:84] Creating CNI manager for ""
	I0115 11:36:57.667725  114677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 11:36:57.667736  114677 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 11:36:57.667748  114677 start_flags.go:321] config:
	{Name:addons-391328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-391328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:36:57.669253  114677 out.go:177] * Starting control plane node addons-391328 in cluster addons-391328
	I0115 11:36:57.670418  114677 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0115 11:36:57.671778  114677 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 11:36:57.672989  114677 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 11:36:57.673020  114677 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17957-106484/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0115 11:36:57.673030  114677 cache.go:56] Caching tarball of preloaded images
	I0115 11:36:57.673102  114677 preload.go:174] Found /home/jenkins/minikube-integration/17957-106484/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 11:36:57.673090  114677 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 11:36:57.673112  114677 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0115 11:36:57.673481  114677 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/config.json ...
	I0115 11:36:57.673503  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/config.json: {Name:mk5e02eb9ea88ad59751e0cd374d5fa6e0a640e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:36:57.687486  114677 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 11:36:57.687608  114677 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 11:36:57.687632  114677 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 11:36:57.687638  114677 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 11:36:57.687651  114677 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 11:36:57.687662  114677 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0115 11:37:09.540820  114677 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0115 11:37:09.540897  114677 cache.go:194] Successfully downloaded all kic artifacts
	I0115 11:37:09.540950  114677 start.go:365] acquiring machines lock for addons-391328: {Name:mke07757636e1b9ec36aa7f26d085b6aa31a28b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:37:09.541069  114677 start.go:369] acquired machines lock for "addons-391328" in 95.348µs
	I0115 11:37:09.541096  114677 start.go:93] Provisioning new machine with config: &{Name:addons-391328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-391328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 11:37:09.541200  114677 start.go:125] createHost starting for "" (driver="docker")
	I0115 11:37:09.604810  114677 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0115 11:37:09.605197  114677 start.go:159] libmachine.API.Create for "addons-391328" (driver="docker")
	I0115 11:37:09.605246  114677 client.go:168] LocalClient.Create starting
	I0115 11:37:09.605401  114677 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca.pem
	I0115 11:37:09.673252  114677 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/cert.pem
	I0115 11:37:09.834432  114677 cli_runner.go:164] Run: docker network inspect addons-391328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 11:37:09.850070  114677 cli_runner.go:211] docker network inspect addons-391328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 11:37:09.850158  114677 network_create.go:281] running [docker network inspect addons-391328] to gather additional debugging logs...
	I0115 11:37:09.850174  114677 cli_runner.go:164] Run: docker network inspect addons-391328
	W0115 11:37:09.865158  114677 cli_runner.go:211] docker network inspect addons-391328 returned with exit code 1
	I0115 11:37:09.865204  114677 network_create.go:284] error running [docker network inspect addons-391328]: docker network inspect addons-391328: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-391328 not found
	I0115 11:37:09.865219  114677 network_create.go:286] output of [docker network inspect addons-391328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-391328 not found
	
	** /stderr **
	I0115 11:37:09.865344  114677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:37:09.881074  114677 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002cc3b40}
	I0115 11:37:09.881123  114677 network_create.go:124] attempt to create docker network addons-391328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 11:37:09.881186  114677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-391328 addons-391328
	I0115 11:37:10.020314  114677 network_create.go:108] docker network addons-391328 192.168.49.0/24 created
	I0115 11:37:10.020356  114677 kic.go:121] calculated static IP "192.168.49.2" for the "addons-391328" container
	I0115 11:37:10.020434  114677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 11:37:10.035423  114677 cli_runner.go:164] Run: docker volume create addons-391328 --label name.minikube.sigs.k8s.io=addons-391328 --label created_by.minikube.sigs.k8s.io=true
	I0115 11:37:10.083062  114677 oci.go:103] Successfully created a docker volume addons-391328
	I0115 11:37:10.083150  114677 cli_runner.go:164] Run: docker run --rm --name addons-391328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391328 --entrypoint /usr/bin/test -v addons-391328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 11:37:13.771780  114677 cli_runner.go:217] Completed: docker run --rm --name addons-391328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391328 --entrypoint /usr/bin/test -v addons-391328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (3.688574372s)
	I0115 11:37:13.771811  114677 oci.go:107] Successfully prepared a docker volume addons-391328
	I0115 11:37:13.771851  114677 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 11:37:13.771876  114677 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 11:37:13.771937  114677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-106484/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-391328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 11:37:18.829198  114677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-106484/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-391328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.057222095s)
	I0115 11:37:18.829232  114677 kic.go:203] duration metric: took 5.057352 seconds to extract preloaded images to volume
	W0115 11:37:18.829398  114677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 11:37:18.829494  114677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 11:37:18.878691  114677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-391328 --name addons-391328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-391328 --network addons-391328 --ip 192.168.49.2 --volume addons-391328:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 11:37:19.193974  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Running}}
	I0115 11:37:19.211598  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:19.230118  114677 cli_runner.go:164] Run: docker exec addons-391328 stat /var/lib/dpkg/alternatives/iptables
	I0115 11:37:19.268524  114677 oci.go:144] the created container "addons-391328" has a running status.
	I0115 11:37:19.268557  114677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa...
	I0115 11:37:19.427406  114677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 11:37:19.446439  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:19.464584  114677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 11:37:19.464614  114677 kic_runner.go:114] Args: [docker exec --privileged addons-391328 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 11:37:19.543458  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:19.565936  114677 machine.go:88] provisioning docker machine ...
	I0115 11:37:19.565987  114677 ubuntu.go:169] provisioning hostname "addons-391328"
	I0115 11:37:19.566180  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:19.585279  114677 main.go:141] libmachine: Using SSH client type: native
	I0115 11:37:19.585626  114677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0115 11:37:19.585640  114677 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-391328 && echo "addons-391328" | sudo tee /etc/hostname
	I0115 11:37:19.846650  114677 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-391328
	
	I0115 11:37:19.846739  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:19.863843  114677 main.go:141] libmachine: Using SSH client type: native
	I0115 11:37:19.864191  114677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0115 11:37:19.864213  114677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-391328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-391328/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-391328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 11:37:19.996002  114677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 11:37:19.996034  114677 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17957-106484/.minikube CaCertPath:/home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17957-106484/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17957-106484/.minikube}
	I0115 11:37:19.996055  114677 ubuntu.go:177] setting up certificates
	I0115 11:37:19.996065  114677 provision.go:83] configureAuth start
	I0115 11:37:19.996117  114677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391328
	I0115 11:37:20.011933  114677 provision.go:138] copyHostCerts
	I0115 11:37:20.012005  114677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17957-106484/.minikube/ca.pem (1082 bytes)
	I0115 11:37:20.012110  114677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17957-106484/.minikube/cert.pem (1123 bytes)
	I0115 11:37:20.012215  114677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17957-106484/.minikube/key.pem (1675 bytes)
	I0115 11:37:20.012270  114677 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17957-106484/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca-key.pem org=jenkins.addons-391328 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-391328]
	I0115 11:37:20.233638  114677 provision.go:172] copyRemoteCerts
	I0115 11:37:20.233707  114677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 11:37:20.233742  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:20.249907  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:20.344451  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 11:37:20.364797  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0115 11:37:20.384733  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 11:37:20.404644  114677 provision.go:86] duration metric: configureAuth took 408.556673ms
	I0115 11:37:20.404680  114677 ubuntu.go:193] setting minikube options for container-runtime
	I0115 11:37:20.404882  114677 config.go:182] Loaded profile config "addons-391328": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:37:20.404898  114677 machine.go:91] provisioned docker machine in 838.933472ms
	I0115 11:37:20.404906  114677 client.go:171] LocalClient.Create took 10.799650271s
	I0115 11:37:20.404927  114677 start.go:167] duration metric: libmachine.API.Create for "addons-391328" took 10.799734086s
	I0115 11:37:20.404938  114677 start.go:300] post-start starting for "addons-391328" (driver="docker")
	I0115 11:37:20.404993  114677 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 11:37:20.405046  114677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 11:37:20.405100  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:20.420702  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:20.516690  114677 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 11:37:20.519811  114677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 11:37:20.519845  114677 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 11:37:20.519854  114677 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 11:37:20.519861  114677 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 11:37:20.519873  114677 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-106484/.minikube/addons for local assets ...
	I0115 11:37:20.519936  114677 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-106484/.minikube/files for local assets ...
	I0115 11:37:20.519957  114677 start.go:303] post-start completed in 115.01007ms
	I0115 11:37:20.520310  114677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391328
	I0115 11:37:20.535574  114677 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/config.json ...
	I0115 11:37:20.535891  114677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:37:20.535949  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:20.550991  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:20.640623  114677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 11:37:20.644437  114677 start.go:128] duration metric: createHost completed in 11.103219424s
	I0115 11:37:20.644461  114677 start.go:83] releasing machines lock for "addons-391328", held for 11.103378851s
	I0115 11:37:20.644528  114677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391328
	I0115 11:37:20.659745  114677 ssh_runner.go:195] Run: cat /version.json
	I0115 11:37:20.659797  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:20.659855  114677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 11:37:20.659920  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:20.676316  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:20.676837  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:20.852822  114677 ssh_runner.go:195] Run: systemctl --version
	I0115 11:37:20.856832  114677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 11:37:20.860718  114677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0115 11:37:20.881896  114677 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0115 11:37:20.881974  114677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:37:20.906197  114677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0115 11:37:20.906226  114677 start.go:475] detecting cgroup driver to use...
	I0115 11:37:20.906273  114677 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 11:37:20.906322  114677 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 11:37:20.917067  114677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 11:37:20.926654  114677 docker.go:217] disabling cri-docker service (if available) ...
	I0115 11:37:20.926703  114677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 11:37:20.938519  114677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 11:37:20.950446  114677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 11:37:21.021169  114677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 11:37:21.101460  114677 docker.go:233] disabling docker service ...
	I0115 11:37:21.101543  114677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 11:37:21.118465  114677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 11:37:21.128314  114677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 11:37:21.200970  114677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 11:37:21.272976  114677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 11:37:21.283059  114677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 11:37:21.296785  114677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 11:37:21.305163  114677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 11:37:21.313480  114677 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 11:37:21.313532  114677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 11:37:21.321698  114677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 11:37:21.329766  114677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 11:37:21.337825  114677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 11:37:21.345828  114677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 11:37:21.353291  114677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0115 11:37:21.361451  114677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 11:37:21.368352  114677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 11:37:21.375065  114677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 11:37:21.443431  114677 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 11:37:21.542006  114677 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 11:37:21.542085  114677 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 11:37:21.545446  114677 start.go:543] Will wait 60s for crictl version
	I0115 11:37:21.545495  114677 ssh_runner.go:195] Run: which crictl
	I0115 11:37:21.548640  114677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 11:37:21.581251  114677 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0115 11:37:21.581325  114677 ssh_runner.go:195] Run: containerd --version
	I0115 11:37:21.606267  114677 ssh_runner.go:195] Run: containerd --version
	I0115 11:37:21.631973  114677 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0115 11:37:21.633551  114677 cli_runner.go:164] Run: docker network inspect addons-391328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:37:21.650618  114677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 11:37:21.654089  114677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 11:37:21.663876  114677 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 11:37:21.663961  114677 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:37:21.695770  114677 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 11:37:21.695796  114677 containerd.go:519] Images already preloaded, skipping extraction
	I0115 11:37:21.695854  114677 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:37:21.726751  114677 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 11:37:21.726778  114677 cache_images.go:84] Images are preloaded, skipping loading
	I0115 11:37:21.726838  114677 ssh_runner.go:195] Run: sudo crictl info
	I0115 11:37:21.757815  114677 cni.go:84] Creating CNI manager for ""
	I0115 11:37:21.757838  114677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 11:37:21.757856  114677 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 11:37:21.757880  114677 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-391328 NodeName:addons-391328 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 11:37:21.757997  114677 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-391328"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 11:37:21.758085  114677 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-391328 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-391328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 11:37:21.758132  114677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 11:37:21.765994  114677 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 11:37:21.766050  114677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 11:37:21.773810  114677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0115 11:37:21.789233  114677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 11:37:21.804384  114677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0115 11:37:21.819484  114677 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 11:37:21.822533  114677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 11:37:21.832130  114677 certs.go:56] Setting up /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328 for IP: 192.168.49.2
	I0115 11:37:21.832173  114677 certs.go:190] acquiring lock for shared ca certs: {Name:mk96f3644c4e7ee69638a4ef92775561fb45c989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:21.832297  114677 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.key
	I0115 11:37:21.909059  114677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt ...
	I0115 11:37:21.909090  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt: {Name:mk4f8a65b15a9f8b0e0b67a4da82c566e7685228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:21.909306  114677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-106484/.minikube/ca.key ...
	I0115 11:37:21.909321  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/ca.key: {Name:mk606b6e6f3c160cf4f97f50434c6eeab8fcc31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:21.909423  114677 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17957-106484/.minikube/proxy-client-ca.key
	I0115 11:37:21.988723  114677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-106484/.minikube/proxy-client-ca.crt ...
	I0115 11:37:21.988764  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/proxy-client-ca.crt: {Name:mk01ba2ef4fff024d1d4221aa38666aacf415c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:21.988975  114677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-106484/.minikube/proxy-client-ca.key ...
	I0115 11:37:21.988992  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/proxy-client-ca.key: {Name:mkd6918e78419bc945c7534f353f2f8203654acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:21.989149  114677 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.key
	I0115 11:37:21.989167  114677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt with IP's: []
	I0115 11:37:22.226704  114677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt ...
	I0115 11:37:22.226741  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: {Name:mk47e3abedcd16590e26351798a7e7206f2deb51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:22.226947  114677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.key ...
	I0115 11:37:22.226969  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.key: {Name:mk54f287600092da93b5a03dda5511ad5dbc176f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:22.227071  114677 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.key.dd3b5fb2
	I0115 11:37:22.227094  114677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 11:37:22.312284  114677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.crt.dd3b5fb2 ...
	I0115 11:37:22.312321  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.crt.dd3b5fb2: {Name:mk8684783a0676b7668072d74e638626e657854b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:22.312513  114677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.key.dd3b5fb2 ...
	I0115 11:37:22.312532  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.key.dd3b5fb2: {Name:mk7412edb60ee7eae6a1765cf5bcd370b577597f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:22.312630  114677 certs.go:337] copying /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.crt
	I0115 11:37:22.312753  114677 certs.go:341] copying /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.key
	I0115 11:37:22.312828  114677 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.key
	I0115 11:37:22.312854  114677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.crt with IP's: []
	I0115 11:37:22.377034  114677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.crt ...
	I0115 11:37:22.377069  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.crt: {Name:mk2848c6a895aba7be8adcd030d0d76cae6b0f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:22.377254  114677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.key ...
	I0115 11:37:22.377270  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.key: {Name:mk56cafb6a8016480f046683ebc8456b0fdc5570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:22.377461  114677 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 11:37:22.377513  114677 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/home/jenkins/minikube-integration/17957-106484/.minikube/certs/ca.pem (1082 bytes)
	I0115 11:37:22.377548  114677 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/home/jenkins/minikube-integration/17957-106484/.minikube/certs/cert.pem (1123 bytes)
	I0115 11:37:22.377597  114677 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-106484/.minikube/certs/home/jenkins/minikube-integration/17957-106484/.minikube/certs/key.pem (1675 bytes)
	I0115 11:37:22.378327  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 11:37:22.400397  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 11:37:22.421421  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 11:37:22.441821  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 11:37:22.462705  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 11:37:22.483485  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 11:37:22.504267  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 11:37:22.524551  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 11:37:22.546160  114677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 11:37:22.566533  114677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 11:37:22.581776  114677 ssh_runner.go:195] Run: openssl version
	I0115 11:37:22.586649  114677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 11:37:22.594651  114677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:37:22.597622  114677 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 11:37 /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:37:22.597686  114677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:37:22.603642  114677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 11:37:22.611496  114677 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 11:37:22.614244  114677 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 11:37:22.614289  114677 kubeadm.go:404] StartCluster: {Name:addons-391328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-391328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:37:22.614361  114677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 11:37:22.614412  114677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 11:37:22.645835  114677 cri.go:89] found id: ""
	I0115 11:37:22.645897  114677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 11:37:22.653841  114677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 11:37:22.661340  114677 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 11:37:22.661385  114677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 11:37:22.668543  114677 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 11:37:22.668588  114677 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 11:37:22.747417  114677 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-gcp\n", err: exit status 1
	I0115 11:37:22.805936  114677 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 11:37:31.591146  114677 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 11:37:31.591255  114677 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 11:37:31.591384  114677 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 11:37:31.591474  114677 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-gcp
	I0115 11:37:31.591525  114677 kubeadm.go:322] OS: Linux
	I0115 11:37:31.591599  114677 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 11:37:31.591672  114677 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 11:37:31.591757  114677 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 11:37:31.591848  114677 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 11:37:31.591926  114677 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 11:37:31.591991  114677 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 11:37:31.592056  114677 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0115 11:37:31.592102  114677 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0115 11:37:31.592142  114677 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0115 11:37:31.592285  114677 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 11:37:31.592420  114677 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 11:37:31.592555  114677 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0115 11:37:31.592644  114677 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 11:37:31.594319  114677 out.go:204]   - Generating certificates and keys ...
	I0115 11:37:31.594398  114677 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 11:37:31.594477  114677 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 11:37:31.594556  114677 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 11:37:31.594626  114677 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 11:37:31.594710  114677 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 11:37:31.594780  114677 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 11:37:31.594826  114677 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 11:37:31.594927  114677 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-391328 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 11:37:31.594980  114677 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 11:37:31.595104  114677 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-391328 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 11:37:31.595186  114677 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 11:37:31.595265  114677 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 11:37:31.595344  114677 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 11:37:31.595422  114677 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 11:37:31.595508  114677 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 11:37:31.595579  114677 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 11:37:31.595689  114677 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 11:37:31.595765  114677 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 11:37:31.595877  114677 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 11:37:31.595966  114677 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 11:37:31.597520  114677 out.go:204]   - Booting up control plane ...
	I0115 11:37:31.597625  114677 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 11:37:31.597713  114677 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 11:37:31.597796  114677 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 11:37:31.597948  114677 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 11:37:31.598069  114677 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 11:37:31.598131  114677 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 11:37:31.598304  114677 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 11:37:31.598412  114677 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002093 seconds
	I0115 11:37:31.598556  114677 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 11:37:31.598725  114677 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 11:37:31.598871  114677 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 11:37:31.599077  114677 kubeadm.go:322] [mark-control-plane] Marking the node addons-391328 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 11:37:31.599159  114677 kubeadm.go:322] [bootstrap-token] Using token: k6j95c.n85t1lqmttlkgwny
	I0115 11:37:31.600579  114677 out.go:204]   - Configuring RBAC rules ...
	I0115 11:37:31.600701  114677 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 11:37:31.600832  114677 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 11:37:31.600955  114677 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 11:37:31.601086  114677 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 11:37:31.601214  114677 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 11:37:31.601325  114677 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 11:37:31.601442  114677 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 11:37:31.601480  114677 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 11:37:31.601529  114677 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 11:37:31.601539  114677 kubeadm.go:322] 
	I0115 11:37:31.601611  114677 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 11:37:31.601626  114677 kubeadm.go:322] 
	I0115 11:37:31.601688  114677 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 11:37:31.601697  114677 kubeadm.go:322] 
	I0115 11:37:31.601717  114677 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 11:37:31.601778  114677 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 11:37:31.601844  114677 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 11:37:31.601852  114677 kubeadm.go:322] 
	I0115 11:37:31.601908  114677 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 11:37:31.601917  114677 kubeadm.go:322] 
	I0115 11:37:31.601971  114677 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 11:37:31.601980  114677 kubeadm.go:322] 
	I0115 11:37:31.602036  114677 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 11:37:31.602142  114677 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 11:37:31.602248  114677 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 11:37:31.602262  114677 kubeadm.go:322] 
	I0115 11:37:31.602383  114677 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 11:37:31.602467  114677 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 11:37:31.602473  114677 kubeadm.go:322] 
	I0115 11:37:31.602541  114677 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k6j95c.n85t1lqmttlkgwny \
	I0115 11:37:31.602653  114677 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:57f04b5a60a80bf91a5bbdb479b5c23adc032d47b377aec7490aa146b9ad04e8 \
	I0115 11:37:31.602686  114677 kubeadm.go:322] 	--control-plane 
	I0115 11:37:31.602692  114677 kubeadm.go:322] 
	I0115 11:37:31.602775  114677 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 11:37:31.602781  114677 kubeadm.go:322] 
	I0115 11:37:31.602877  114677 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k6j95c.n85t1lqmttlkgwny \
	I0115 11:37:31.603024  114677 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:57f04b5a60a80bf91a5bbdb479b5c23adc032d47b377aec7490aa146b9ad04e8 
	I0115 11:37:31.603040  114677 cni.go:84] Creating CNI manager for ""
	I0115 11:37:31.603047  114677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 11:37:31.604634  114677 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 11:37:31.606028  114677 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 11:37:31.609802  114677 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 11:37:31.609823  114677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 11:37:31.648233  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 11:37:32.303156  114677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 11:37:32.303268  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:32.303310  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e minikube.k8s.io/name=addons-391328 minikube.k8s.io/updated_at=2024_01_15T11_37_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:32.386199  114677 ops.go:34] apiserver oom_adj: -16
	I0115 11:37:32.386350  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:32.886681  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:33.386723  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:33.886439  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:34.386757  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:34.886792  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:35.386749  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:35.887219  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:36.386769  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:36.886426  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:37.387356  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:37.886925  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:38.387045  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:38.886554  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:39.386411  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:39.886974  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:40.387217  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:40.887134  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:41.387171  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:41.886921  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:42.387298  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:42.886649  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:43.387236  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:43.887198  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:44.386415  114677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:37:44.452328  114677 kubeadm.go:1088] duration metric: took 12.149123028s to wait for elevateKubeSystemPrivileges.
	I0115 11:37:44.452366  114677 kubeadm.go:406] StartCluster complete in 21.838080998s
	I0115 11:37:44.452385  114677 settings.go:142] acquiring lock: {Name:mkb27abb0eb10edb69ceed6d6dd7b587161f3547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:44.452510  114677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 11:37:44.453011  114677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-106484/kubeconfig: {Name:mk32e1758294a50a99d31d9815d4e7a125a87b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:44.453291  114677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 11:37:44.453389  114677 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0115 11:37:44.453504  114677 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-391328"
	I0115 11:37:44.453519  114677 addons.go:69] Setting default-storageclass=true in profile "addons-391328"
	I0115 11:37:44.453525  114677 addons.go:69] Setting metrics-server=true in profile "addons-391328"
	I0115 11:37:44.453540  114677 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-391328"
	I0115 11:37:44.453532  114677 addons.go:69] Setting registry=true in profile "addons-391328"
	I0115 11:37:44.453557  114677 addons.go:234] Setting addon metrics-server=true in "addons-391328"
	I0115 11:37:44.453558  114677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-391328"
	I0115 11:37:44.453569  114677 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-391328"
	I0115 11:37:44.453562  114677 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-391328"
	I0115 11:37:44.453578  114677 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-391328"
	I0115 11:37:44.453585  114677 addons.go:234] Setting addon registry=true in "addons-391328"
	I0115 11:37:44.453588  114677 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-391328"
	I0115 11:37:44.453610  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.453605  114677 addons.go:69] Setting inspektor-gadget=true in profile "addons-391328"
	I0115 11:37:44.453622  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.453629  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.453632  114677 addons.go:234] Setting addon inspektor-gadget=true in "addons-391328"
	I0115 11:37:44.453636  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.453694  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.453932  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.453943  114677 addons.go:69] Setting volumesnapshots=true in profile "addons-391328"
	I0115 11:37:44.453957  114677 addons.go:234] Setting addon volumesnapshots=true in "addons-391328"
	I0115 11:37:44.453991  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.454101  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.454106  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.454112  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.454116  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.454132  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.453932  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.454440  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.453506  114677 config.go:182] Loaded profile config "addons-391328": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:37:44.455146  114677 addons.go:69] Setting cloud-spanner=true in profile "addons-391328"
	I0115 11:37:44.455178  114677 addons.go:234] Setting addon cloud-spanner=true in "addons-391328"
	I0115 11:37:44.455219  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.455677  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.453533  114677 addons.go:69] Setting storage-provisioner=true in profile "addons-391328"
	I0115 11:37:44.455849  114677 addons.go:234] Setting addon storage-provisioner=true in "addons-391328"
	I0115 11:37:44.455907  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.455930  114677 addons.go:69] Setting gcp-auth=true in profile "addons-391328"
	I0115 11:37:44.455971  114677 mustload.go:65] Loading cluster: addons-391328
	I0115 11:37:44.456266  114677 config.go:182] Loaded profile config "addons-391328": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:37:44.456413  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.456558  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.456603  114677 addons.go:69] Setting ingress=true in profile "addons-391328"
	I0115 11:37:44.456627  114677 addons.go:234] Setting addon ingress=true in "addons-391328"
	I0115 11:37:44.456686  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.453508  114677 addons.go:69] Setting yakd=true in profile "addons-391328"
	I0115 11:37:44.456812  114677 addons.go:234] Setting addon yakd=true in "addons-391328"
	I0115 11:37:44.456851  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.457173  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.455923  114677 addons.go:69] Setting helm-tiller=true in profile "addons-391328"
	I0115 11:37:44.457426  114677 addons.go:234] Setting addon helm-tiller=true in "addons-391328"
	I0115 11:37:44.457448  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.457515  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.453510  114677 addons.go:69] Setting ingress-dns=true in profile "addons-391328"
	I0115 11:37:44.460978  114677 addons.go:234] Setting addon ingress-dns=true in "addons-391328"
	I0115 11:37:44.461039  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.461490  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.493348  114677 addons.go:234] Setting addon default-storageclass=true in "addons-391328"
	I0115 11:37:44.493399  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.493888  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.498623  114677 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 11:37:44.500077  114677 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 11:37:44.500097  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 11:37:44.500181  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.502347  114677 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 11:37:44.498292  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.506279  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.508118  114677 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 11:37:44.510139  114677 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:37:44.511638  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 11:37:44.512406  114677 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:37:44.512417  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 11:37:44.512455  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.510263  114677 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 11:37:44.514104  114677 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 11:37:44.514124  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 11:37:44.514173  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.512455  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.517555  114677 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-391328"
	I0115 11:37:44.517602  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:44.519830  114677 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 11:37:44.510210  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 11:37:44.511656  114677 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 11:37:44.518293  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:44.521391  114677 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 11:37:44.524661  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 11:37:44.522875  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 11:37:44.523020  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 11:37:44.526098  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.527406  114677 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 11:37:44.527519  114677 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 11:37:44.528796  114677 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 11:37:44.528812  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 11:37:44.528818  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 11:37:44.528868  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.528891  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.527434  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 11:37:44.530724  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 11:37:44.532015  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 11:37:44.533409  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 11:37:44.532468  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.537135  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 11:37:44.543210  114677 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 11:37:44.545083  114677 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 11:37:44.545106  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 11:37:44.545168  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.550750  114677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 11:37:44.549704  114677 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 11:37:44.557390  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 11:37:44.558924  114677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 11:37:44.557469  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.557475  114677 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 11:37:44.565307  114677 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 11:37:44.565327  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 11:37:44.565390  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.567263  114677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 11:37:44.570171  114677 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 11:37:44.572966  114677 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 11:37:44.572986  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0115 11:37:44.570142  114677 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 11:37:44.573016  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 11:37:44.573041  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.573067  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.572721  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.578784  114677 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0115 11:37:44.580456  114677 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0115 11:37:44.580477  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0115 11:37:44.580536  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.578452  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.587841  114677 out.go:177]   - Using image docker.io/busybox:stable
	I0115 11:37:44.586141  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.587022  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.590513  114677 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 11:37:44.592190  114677 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 11:37:44.592215  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 11:37:44.592288  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:44.601638  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.609344  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.611045  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.619165  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.620392  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.620604  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.621354  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.622567  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:44.633929  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	W0115 11:37:44.640556  114677 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0115 11:37:44.640589  114677 retry.go:31] will retry after 346.264462ms: ssh: handshake failed: EOF
	I0115 11:37:44.663318  114677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 11:37:44.841282  114677 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 11:37:44.841309  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 11:37:44.952767  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:37:44.954219  114677 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 11:37:44.954306  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 11:37:44.959323  114677 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-391328" context rescaled to 1 replicas
	I0115 11:37:44.959420  114677 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 11:37:44.961382  114677 out.go:177] * Verifying Kubernetes components...
	I0115 11:37:44.963177  114677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:37:44.962995  114677 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 11:37:44.963349  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 11:37:44.963581  114677 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 11:37:44.963596  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 11:37:45.055408  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 11:37:45.143704  114677 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 11:37:45.143737  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 11:37:45.145147  114677 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 11:37:45.145175  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 11:37:45.146082  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 11:37:45.153542  114677 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 11:37:45.153580  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 11:37:45.158937  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 11:37:45.159153  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 11:37:45.160644  114677 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0115 11:37:45.160665  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0115 11:37:45.246406  114677 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 11:37:45.246487  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 11:37:45.252407  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 11:37:45.255363  114677 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 11:37:45.255455  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 11:37:45.360258  114677 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 11:37:45.360284  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 11:37:45.438159  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 11:37:45.438475  114677 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 11:37:45.438513  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 11:37:45.452906  114677 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 11:37:45.452999  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0115 11:37:45.539254  114677 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 11:37:45.539350  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 11:37:45.643376  114677 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 11:37:45.643463  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 11:37:45.647210  114677 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 11:37:45.647288  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 11:37:45.745132  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 11:37:45.745735  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 11:37:45.760024  114677 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 11:37:45.760057  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 11:37:45.848839  114677 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 11:37:45.848890  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 11:37:45.859387  114677 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 11:37:45.859429  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 11:37:45.947709  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 11:37:45.954043  114677 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 11:37:45.954128  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 11:37:46.149854  114677 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 11:37:46.149943  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 11:37:46.157091  114677 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 11:37:46.157181  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 11:37:46.339171  114677 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 11:37:46.339286  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 11:37:46.453034  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 11:37:46.640627  114677 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 11:37:46.640733  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 11:37:46.657152  114677 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.993790437s)
	I0115 11:37:46.657240  114677 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0115 11:37:46.841922  114677 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 11:37:46.841957  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 11:37:46.846226  114677 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 11:37:46.846254  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 11:37:46.958292  114677 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 11:37:46.958384  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 11:37:47.249440  114677 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 11:37:47.249471  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 11:37:47.252758  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 11:37:47.346215  114677 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 11:37:47.346250  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 11:37:47.836960  114677 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 11:37:47.837006  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 11:37:48.241590  114677 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 11:37:48.241637  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 11:37:48.538904  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.586032101s)
	I0115 11:37:48.538989  114677 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.575745937s)
	I0115 11:37:48.540448  114677 node_ready.go:35] waiting up to 6m0s for node "addons-391328" to be "Ready" ...
	I0115 11:37:48.547682  114677 node_ready.go:49] node "addons-391328" has status "Ready":"True"
	I0115 11:37:48.547713  114677 node_ready.go:38] duration metric: took 7.209647ms waiting for node "addons-391328" to be "Ready" ...
	I0115 11:37:48.547726  114677 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 11:37:48.559616  114677 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace to be "Ready" ...
	I0115 11:37:48.653013  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 11:37:48.754646  114677 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 11:37:48.754681  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 11:37:49.239645  114677 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 11:37:49.239741  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 11:37:49.660287  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 11:37:50.654206  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:37:51.344207  114677 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 11:37:51.344417  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:51.368359  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:52.037289  114677 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 11:37:52.148760  114677 addons.go:234] Setting addon gcp-auth=true in "addons-391328"
	I0115 11:37:52.148829  114677 host.go:66] Checking if "addons-391328" exists ...
	I0115 11:37:52.149333  114677 cli_runner.go:164] Run: docker container inspect addons-391328 --format={{.State.Status}}
	I0115 11:37:52.171274  114677 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 11:37:52.171334  114677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391328
	I0115 11:37:52.193669  114677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/addons-391328/id_rsa Username:docker}
	I0115 11:37:52.751012  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.695561485s)
	I0115 11:37:52.751055  114677 addons.go:470] Verifying addon ingress=true in "addons-391328"
	I0115 11:37:52.751087  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.604970048s)
	I0115 11:37:52.752626  114677 out.go:177] * Verifying ingress addon...
	I0115 11:37:52.751134  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.591936619s)
	I0115 11:37:52.751152  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.592188498s)
	I0115 11:37:52.751199  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.498709708s)
	I0115 11:37:52.751294  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.313039257s)
	I0115 11:37:52.751351  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.005581119s)
	I0115 11:37:52.751424  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.006254995s)
	I0115 11:37:52.751468  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.803670728s)
	I0115 11:37:52.751531  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.298458664s)
	I0115 11:37:52.751634  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.498791124s)
	I0115 11:37:52.751718  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.098665346s)
	I0115 11:37:52.752714  114677 addons.go:470] Verifying addon metrics-server=true in "addons-391328"
	I0115 11:37:52.752730  114677 addons.go:470] Verifying addon registry=true in "addons-391328"
	I0115 11:37:52.754391  114677 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-391328 service yakd-dashboard -n yakd-dashboard
	
	W0115 11:37:52.752784  114677 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 11:37:52.755344  114677 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 11:37:52.756487  114677 out.go:177] * Verifying registry addon...
	I0115 11:37:52.757271  114677 retry.go:31] will retry after 306.15755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 11:37:52.758923  114677 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0115 11:37:52.764430  114677 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 11:37:52.764456  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0115 11:37:52.769430  114677 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0115 11:37:52.770578  114677 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 11:37:52.770637  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:53.065167  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 11:37:53.066262  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:37:53.261623  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:53.264434  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:53.761196  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:53.763919  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:54.261475  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:54.263803  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:54.763302  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:54.764560  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:54.765992  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.105651323s)
	I0115 11:37:54.766041  114677 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-391328"
	I0115 11:37:54.766080  114677 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.594772791s)
	I0115 11:37:54.769780  114677 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 11:37:54.771395  114677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 11:37:54.773028  114677 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 11:37:54.774556  114677 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 11:37:54.774570  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 11:37:54.772436  114677 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 11:37:54.842312  114677 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 11:37:54.842350  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:54.864015  114677 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 11:37:54.864057  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 11:37:54.949852  114677 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 11:37:54.949886  114677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0115 11:37:54.972558  114677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 11:37:55.260484  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:55.263512  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:55.270249  114677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.205031406s)
	I0115 11:37:55.280600  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:55.566495  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:37:55.760313  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:55.763094  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:55.780532  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:55.946559  114677 addons.go:470] Verifying addon gcp-auth=true in "addons-391328"
	I0115 11:37:55.948647  114677 out.go:177] * Verifying gcp-auth addon...
	I0115 11:37:55.951155  114677 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 11:37:55.954729  114677 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 11:37:55.954747  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:56.261557  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:56.264263  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:56.280446  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:56.454808  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:56.760410  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:56.763885  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:56.780759  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:56.958452  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:57.261415  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:57.264199  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:57.279293  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:57.454492  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:57.761254  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:57.763049  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:57.779818  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:57.955046  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:58.065484  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:37:58.261473  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:58.263515  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:58.279515  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:58.454847  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:58.761533  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:58.763976  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:58.780051  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:58.954416  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:59.261134  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:59.264544  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:59.280765  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:59.460382  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:37:59.761793  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:37:59.763587  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:37:59.780453  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:37:59.954763  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:00.067090  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:00.261373  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:00.263332  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:00.280246  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:00.454802  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:00.761976  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:00.764009  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:00.780360  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:00.954798  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:01.260990  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:01.263804  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:01.280716  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:01.455805  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:01.761208  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:01.763008  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:01.779365  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:01.954900  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:02.069325  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:02.260749  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:02.262601  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:02.279477  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:02.454782  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:02.761568  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:02.763646  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:02.780335  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:02.954420  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:03.260855  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:03.263064  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:03.279014  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:03.454503  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:03.760453  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:03.762732  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:03.780120  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:03.955195  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:04.260485  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:04.263755  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:04.281226  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:04.455182  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:04.565259  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:04.760960  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:04.763240  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:04.780589  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:04.954776  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:05.261037  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:05.263176  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:05.280360  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:05.455235  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:05.761336  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:05.763370  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:05.779630  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:05.954662  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:06.261407  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:06.263421  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:06.279498  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:06.454915  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:06.760790  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:06.762976  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:06.779908  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:06.956657  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:07.065747  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:07.260769  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:07.263303  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:07.279221  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:07.454708  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:07.761164  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:07.762922  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:07.779742  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:07.955290  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:08.261436  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:08.263414  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:08.280010  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:08.455070  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:08.760974  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:08.762920  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:08.780035  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:08.954898  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:09.066020  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:09.261152  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:09.263014  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:09.279230  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:09.454377  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:09.760694  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:09.762993  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:09.779920  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:09.954091  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:10.261523  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:10.263717  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:10.279818  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:10.455154  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:10.760874  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:10.763042  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:10.779466  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:10.954835  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:11.066068  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:11.260760  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:11.263285  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:11.279265  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:11.454791  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:11.760821  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:11.763290  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:11.779608  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:11.954990  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:12.261654  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:12.263691  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:12.279731  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:12.455349  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:12.760339  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:12.762856  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:12.779785  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:12.955316  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:13.260520  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:13.262624  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:13.279892  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:13.455144  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:13.565550  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:13.763434  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:13.768695  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:13.779452  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:13.954779  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:14.260439  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:14.263174  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:14.279535  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:14.454715  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:14.761032  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:14.764831  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:14.779693  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:14.955061  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:15.261181  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:15.263369  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:15.279327  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:15.454821  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:15.565977  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:15.760925  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:15.762947  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:15.780223  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:15.954702  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:16.261454  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:16.263142  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:16.279221  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:16.454774  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:16.760804  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:16.763083  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:16.779053  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:16.954703  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:17.260902  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:17.263521  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:17.279845  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:17.455148  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:17.566130  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:17.761797  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:17.763604  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:17.779529  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:17.954952  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:18.260937  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:18.262924  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:18.281059  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:18.455103  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:18.761398  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:18.763783  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:18.780331  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:18.954524  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:19.260600  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:19.267348  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:19.282532  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:19.454443  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:19.760393  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:19.762891  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:19.779973  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:19.954436  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:20.066236  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:20.261639  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:20.264002  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:20.280604  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:20.455458  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:20.761100  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:20.764363  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:20.780124  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:20.955108  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:21.261483  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:21.263997  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:21.281192  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:21.454541  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:21.761828  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:21.763505  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:21.779796  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:21.955668  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:22.066645  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:22.261420  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:22.264546  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:22.280516  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:22.455390  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:22.761757  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:22.763797  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:22.781059  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:22.955866  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:23.260936  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:23.264045  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:23.280251  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:23.454707  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:23.760285  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:23.763401  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:23.779256  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:23.954985  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:24.067284  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:24.261494  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:24.263874  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:24.281160  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:24.455435  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:24.761317  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:24.762944  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:24.780566  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:24.954558  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:25.260840  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:25.263873  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:25.280176  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:25.454685  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:25.762343  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:25.764432  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:25.781908  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:25.955335  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:26.261024  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:26.263561  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:26.279861  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:26.455849  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:26.568173  114677 pod_ready.go:102] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:26.759833  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:26.765456  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:26.780567  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:26.956567  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:27.261080  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:27.263948  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:27.280796  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:27.455456  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:27.566070  114677 pod_ready.go:92] pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace has status "Ready":"True"
	I0115 11:38:27.566100  114677 pod_ready.go:81] duration metric: took 39.006448155s waiting for pod "coredns-5dd5756b68-z8g2n" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.566114  114677 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.571774  114677 pod_ready.go:92] pod "etcd-addons-391328" in "kube-system" namespace has status "Ready":"True"
	I0115 11:38:27.571803  114677 pod_ready.go:81] duration metric: took 5.680574ms waiting for pod "etcd-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.571820  114677 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.576415  114677 pod_ready.go:92] pod "kube-apiserver-addons-391328" in "kube-system" namespace has status "Ready":"True"
	I0115 11:38:27.576436  114677 pod_ready.go:81] duration metric: took 4.608927ms waiting for pod "kube-apiserver-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.576445  114677 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.580890  114677 pod_ready.go:92] pod "kube-controller-manager-addons-391328" in "kube-system" namespace has status "Ready":"True"
	I0115 11:38:27.580912  114677 pod_ready.go:81] duration metric: took 4.460336ms waiting for pod "kube-controller-manager-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.580922  114677 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9tqds" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.587496  114677 pod_ready.go:92] pod "kube-proxy-9tqds" in "kube-system" namespace has status "Ready":"True"
	I0115 11:38:27.587516  114677 pod_ready.go:81] duration metric: took 6.588632ms waiting for pod "kube-proxy-9tqds" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.587526  114677 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.761029  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:27.763381  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:27.779154  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:27.955429  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:27.964466  114677 pod_ready.go:92] pod "kube-scheduler-addons-391328" in "kube-system" namespace has status "Ready":"True"
	I0115 11:38:27.964492  114677 pod_ready.go:81] duration metric: took 376.960019ms waiting for pod "kube-scheduler-addons-391328" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:27.964502  114677 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bcwtk" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:28.263220  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:28.264451  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:28.279251  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:28.365265  114677 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bcwtk" in "kube-system" namespace has status "Ready":"True"
	I0115 11:38:28.365293  114677 pod_ready.go:81] duration metric: took 400.781932ms waiting for pod "nvidia-device-plugin-daemonset-bcwtk" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:28.365308  114677 pod_ready.go:38] duration metric: took 39.817569251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 11:38:28.365329  114677 api_server.go:52] waiting for apiserver process to appear ...
	I0115 11:38:28.365385  114677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 11:38:28.381521  114677 api_server.go:72] duration metric: took 43.422037892s to wait for apiserver process to appear ...
	I0115 11:38:28.381549  114677 api_server.go:88] waiting for apiserver healthz status ...
	I0115 11:38:28.381578  114677 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 11:38:28.387682  114677 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 11:38:28.388870  114677 api_server.go:141] control plane version: v1.28.4
	I0115 11:38:28.388895  114677 api_server.go:131] duration metric: took 7.339304ms to wait for apiserver health ...
	I0115 11:38:28.388903  114677 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 11:38:28.455210  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:28.572623  114677 system_pods.go:59] 19 kube-system pods found
	I0115 11:38:28.572659  114677 system_pods.go:61] "coredns-5dd5756b68-z8g2n" [dfbd2a3a-071a-41fd-ad01-c312449a2558] Running
	I0115 11:38:28.572672  114677 system_pods.go:61] "csi-hostpath-attacher-0" [0781f6ea-f3e6-4923-ad9a-38afe9569e13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 11:38:28.572678  114677 system_pods.go:61] "csi-hostpath-resizer-0" [0b3ed967-13c7-45e6-a581-ad31c951a904] Running
	I0115 11:38:28.572690  114677 system_pods.go:61] "csi-hostpathplugin-2r2dn" [d8856a82-7cb5-47a9-8fa8-27bd94cd83dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 11:38:28.572698  114677 system_pods.go:61] "etcd-addons-391328" [0ac11c5e-35f6-4fa0-8ccc-cd975f42b1bd] Running
	I0115 11:38:28.572705  114677 system_pods.go:61] "kindnet-jf42m" [4a15d88e-2803-4788-a1d0-1b1c561e34b7] Running
	I0115 11:38:28.572712  114677 system_pods.go:61] "kube-apiserver-addons-391328" [a6a9ca07-738f-460a-abc5-24c33d37e352] Running
	I0115 11:38:28.572719  114677 system_pods.go:61] "kube-controller-manager-addons-391328" [4198bdab-4ad8-4915-bdaa-6950d92cc2bf] Running
	I0115 11:38:28.572727  114677 system_pods.go:61] "kube-ingress-dns-minikube" [232add95-bcb2-4b7b-96ef-4ee27bf958c1] Running
	I0115 11:38:28.572735  114677 system_pods.go:61] "kube-proxy-9tqds" [711e1e81-b451-40b5-8023-cb6f0639b6d0] Running
	I0115 11:38:28.572752  114677 system_pods.go:61] "kube-scheduler-addons-391328" [29c24921-cf6b-4227-8cf4-ed7a84a537ee] Running
	I0115 11:38:28.572765  114677 system_pods.go:61] "metrics-server-7c66d45ddc-m2wd6" [f4ee5bd5-87e8-4e25-bcd1-f1937e734e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 11:38:28.572775  114677 system_pods.go:61] "nvidia-device-plugin-daemonset-bcwtk" [a24b6f26-d98a-4c84-8ca9-80c67aaa3202] Running
	I0115 11:38:28.572785  114677 system_pods.go:61] "registry-mwfqp" [2f7e31d1-e2b8-448a-be07-28fbd7de6478] Running
	I0115 11:38:28.572797  114677 system_pods.go:61] "registry-proxy-v8hx4" [2ea4d868-9428-475d-834a-0f2a89643232] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 11:38:28.572805  114677 system_pods.go:61] "snapshot-controller-58dbcc7b99-52ngx" [30564f65-f014-4c8a-9b2b-137ba5669983] Running
	I0115 11:38:28.572819  114677 system_pods.go:61] "snapshot-controller-58dbcc7b99-6p45q" [c35a036b-83ec-4b5f-914b-0e4b15cbd853] Running
	I0115 11:38:28.572827  114677 system_pods.go:61] "storage-provisioner" [fbd76747-3444-4ab0-8193-ee8107576dc4] Running
	I0115 11:38:28.572840  114677 system_pods.go:61] "tiller-deploy-7b677967b9-w8hrz" [3bdf97b7-56ee-499d-9908-6e07f69e36bc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0115 11:38:28.572852  114677 system_pods.go:74] duration metric: took 183.942278ms to wait for pod list to return data ...
	I0115 11:38:28.572867  114677 default_sa.go:34] waiting for default service account to be created ...
	I0115 11:38:28.760831  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:28.762827  114677 default_sa.go:45] found service account: "default"
	I0115 11:38:28.762857  114677 default_sa.go:55] duration metric: took 189.978482ms for default service account to be created ...
	I0115 11:38:28.762868  114677 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 11:38:28.764000  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:28.779976  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:28.954823  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:28.986258  114677 system_pods.go:86] 19 kube-system pods found
	I0115 11:38:28.986293  114677 system_pods.go:89] "coredns-5dd5756b68-z8g2n" [dfbd2a3a-071a-41fd-ad01-c312449a2558] Running
	I0115 11:38:28.986307  114677 system_pods.go:89] "csi-hostpath-attacher-0" [0781f6ea-f3e6-4923-ad9a-38afe9569e13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 11:38:28.986313  114677 system_pods.go:89] "csi-hostpath-resizer-0" [0b3ed967-13c7-45e6-a581-ad31c951a904] Running
	I0115 11:38:28.986326  114677 system_pods.go:89] "csi-hostpathplugin-2r2dn" [d8856a82-7cb5-47a9-8fa8-27bd94cd83dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 11:38:28.986330  114677 system_pods.go:89] "etcd-addons-391328" [0ac11c5e-35f6-4fa0-8ccc-cd975f42b1bd] Running
	I0115 11:38:28.986336  114677 system_pods.go:89] "kindnet-jf42m" [4a15d88e-2803-4788-a1d0-1b1c561e34b7] Running
	I0115 11:38:28.986340  114677 system_pods.go:89] "kube-apiserver-addons-391328" [a6a9ca07-738f-460a-abc5-24c33d37e352] Running
	I0115 11:38:28.986344  114677 system_pods.go:89] "kube-controller-manager-addons-391328" [4198bdab-4ad8-4915-bdaa-6950d92cc2bf] Running
	I0115 11:38:28.986352  114677 system_pods.go:89] "kube-ingress-dns-minikube" [232add95-bcb2-4b7b-96ef-4ee27bf958c1] Running
	I0115 11:38:28.986357  114677 system_pods.go:89] "kube-proxy-9tqds" [711e1e81-b451-40b5-8023-cb6f0639b6d0] Running
	I0115 11:38:28.986364  114677 system_pods.go:89] "kube-scheduler-addons-391328" [29c24921-cf6b-4227-8cf4-ed7a84a537ee] Running
	I0115 11:38:28.986369  114677 system_pods.go:89] "metrics-server-7c66d45ddc-m2wd6" [f4ee5bd5-87e8-4e25-bcd1-f1937e734e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 11:38:28.986376  114677 system_pods.go:89] "nvidia-device-plugin-daemonset-bcwtk" [a24b6f26-d98a-4c84-8ca9-80c67aaa3202] Running
	I0115 11:38:28.986381  114677 system_pods.go:89] "registry-mwfqp" [2f7e31d1-e2b8-448a-be07-28fbd7de6478] Running
	I0115 11:38:28.986387  114677 system_pods.go:89] "registry-proxy-v8hx4" [2ea4d868-9428-475d-834a-0f2a89643232] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 11:38:28.986395  114677 system_pods.go:89] "snapshot-controller-58dbcc7b99-52ngx" [30564f65-f014-4c8a-9b2b-137ba5669983] Running
	I0115 11:38:28.986400  114677 system_pods.go:89] "snapshot-controller-58dbcc7b99-6p45q" [c35a036b-83ec-4b5f-914b-0e4b15cbd853] Running
	I0115 11:38:28.986405  114677 system_pods.go:89] "storage-provisioner" [fbd76747-3444-4ab0-8193-ee8107576dc4] Running
	I0115 11:38:28.986411  114677 system_pods.go:89] "tiller-deploy-7b677967b9-w8hrz" [3bdf97b7-56ee-499d-9908-6e07f69e36bc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0115 11:38:28.986424  114677 system_pods.go:126] duration metric: took 223.549626ms to wait for k8s-apps to be running ...
	I0115 11:38:28.986432  114677 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 11:38:28.986491  114677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:38:28.997849  114677 system_svc.go:56] duration metric: took 11.408224ms WaitForService to wait for kubelet.
	I0115 11:38:28.997874  114677 kubeadm.go:581] duration metric: took 44.038400008s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 11:38:28.997895  114677 node_conditions.go:102] verifying NodePressure condition ...
	I0115 11:38:29.164328  114677 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0115 11:38:29.164362  114677 node_conditions.go:123] node cpu capacity is 8
	I0115 11:38:29.164380  114677 node_conditions.go:105] duration metric: took 166.480253ms to run NodePressure ...
	I0115 11:38:29.164395  114677 start.go:228] waiting for startup goroutines ...
	I0115 11:38:29.260833  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:29.262855  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:29.280389  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:29.454776  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:29.760976  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:29.763363  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:29.780094  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:29.955852  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:30.261374  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:30.264632  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:30.279748  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:30.455466  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:30.761798  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:30.763608  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:30.780423  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:30.955147  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:31.261666  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:31.263524  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:31.279752  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:31.454920  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:31.760722  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:31.762771  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:31.780035  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:31.954519  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:32.260453  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:32.263306  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:32.279589  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:32.454874  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:32.761450  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:32.763557  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:32.781338  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:32.954714  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:33.260736  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:33.263996  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:33.281538  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:33.455111  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:33.761728  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:33.763935  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:33.782307  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:33.954472  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:34.262045  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:34.279500  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:34.282383  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:34.455365  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:34.762145  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:34.763638  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:34.780136  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:34.954390  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:35.261516  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:35.263703  114677 kapi.go:107] duration metric: took 42.504775247s to wait for kubernetes.io/minikube-addons=registry ...
	I0115 11:38:35.280245  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:35.454951  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:35.761372  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:35.779214  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:35.954944  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:36.261811  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:36.281055  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:36.455564  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:36.761199  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:36.780867  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:36.957687  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:37.261921  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:37.280602  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:37.454879  114677 kapi.go:107] duration metric: took 41.503721526s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0115 11:38:37.457071  114677 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-391328 cluster.
	I0115 11:38:37.458557  114677 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 11:38:37.460055  114677 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0115 11:38:37.760944  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:37.780423  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:38.260544  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:38.279760  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:38.761577  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:38.779912  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:39.262773  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:39.338749  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:39.761203  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:39.780689  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:40.260749  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:40.279429  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:40.760822  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:40.779950  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:41.272907  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:41.279881  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:41.761323  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:41.779757  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:42.261725  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:42.282574  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:42.761386  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:42.780286  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:43.261774  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:43.280220  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:43.760514  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:43.780003  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:44.261145  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:44.280299  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:44.760775  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:44.779644  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:45.260780  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:45.280025  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:45.761590  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:45.779881  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:46.262141  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:46.281069  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:46.761300  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:46.779276  114677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:47.260688  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:47.282125  114677 kapi.go:107] duration metric: took 52.509684644s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0115 11:38:47.761048  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:48.260731  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:48.761160  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:49.260966  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:49.760275  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:50.260746  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:50.760910  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:51.260710  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:51.761341  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:52.261144  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:52.763091  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:53.260634  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:53.761241  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:54.260333  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:54.761133  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:55.260855  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:55.761202  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:56.260506  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:56.761063  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:57.260852  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:57.760890  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:58.262048  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:58.760864  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:59.261755  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:59.761436  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:00.261256  114677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:00.760438  114677 kapi.go:107] duration metric: took 1m8.005093664s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 11:39:00.762436  114677 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, inspektor-gadget, helm-tiller, metrics-server, ingress-dns, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0115 11:39:00.763995  114677 addons.go:505] enable addons completed in 1m16.310610707s: enabled=[storage-provisioner cloud-spanner inspektor-gadget helm-tiller metrics-server ingress-dns nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0115 11:39:00.764034  114677 start.go:233] waiting for cluster config update ...
	I0115 11:39:00.764053  114677 start.go:242] writing updated cluster config ...
	I0115 11:39:00.764323  114677 ssh_runner.go:195] Run: rm -f paused
	I0115 11:39:00.816906  114677 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 11:39:00.819029  114677 out.go:177] * Done! kubectl is now configured to use "addons-391328" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	a63ebbafd894c       dd1b12fcb6097       5 seconds ago        Running             hello-world-app              0                   885c116664879       hello-world-app-5d77478584-zwhzv
	1717788f30fca       529b5644c430c       15 seconds ago       Running             nginx                        0                   9f8f8fc911888       nginx
	f613a7adde48a       1ebff0f9671bc       55 seconds ago       Exited              patch                        2                   4afd16b7888b1       ingress-nginx-admission-patch-xlm5l
	2c16e1d91a95d       31de47c733c91       55 seconds ago       Running             yakd                         0                   a6a8bd78a094a       yakd-dashboard-9947fc6bf-rvrlr
	16abe334e05d4       6d2a98b274382       59 seconds ago       Running             gcp-auth                     0                   75181eb0d197f       gcp-auth-d4c87556c-lp7j6
	4040864de77fd       e16d1e3a10667       About a minute ago   Running             local-path-provisioner       0                   ee6857dcd5e32       local-path-provisioner-78b46b4d5c-ghr5p
	2862e7247c61c       ead0a4a53df89       About a minute ago   Running             coredns                      0                   1b084c51396b9       coredns-5dd5756b68-z8g2n
	f8f6e870037b0       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller   0                   4088539a763e7       snapshot-controller-58dbcc7b99-52ngx
	63bee67192da9       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller   0                   ba6593607bf04       snapshot-controller-58dbcc7b99-6p45q
	6848d7d748f0c       6e38f40d628db       About a minute ago   Running             storage-provisioner          0                   a8ede6d3a11ad       storage-provisioner
	526de9ff2cdbc       c7d1297425461       About a minute ago   Running             kindnet-cni                  0                   0f6a17fbb55e9       kindnet-jf42m
	295e88588854d       83f6cc407eed8       About a minute ago   Running             kube-proxy                   0                   72264fb9aebaa       kube-proxy-9tqds
	885ccee66b992       73deb9a3f7025       2 minutes ago        Running             etcd                         0                   79a0dcb2a8620       etcd-addons-391328
	1b78d94c40a3d       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager      0                   9191613783b0d       kube-controller-manager-addons-391328
	ac5a448b73cd7       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver               0                   1c5b564f1b3ee       kube-apiserver-addons-391328
	0818e33a39ae7       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler               0                   b1bcce8fbeaae       kube-scheduler-addons-391328
	
	
	==> containerd <==
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.636885105Z" level=error msg="ContainerStatus for \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.638163731Z" level=error msg="ContainerStatus for \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.638680109Z" level=error msg="ContainerStatus for \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.639177791Z" level=error msg="ContainerStatus for \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.639665429Z" level=error msg="ContainerStatus for \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.640098292Z" level=error msg="ContainerStatus for \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.640569861Z" level=error msg="ContainerStatus for \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.641040293Z" level=error msg="ContainerStatus for \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.641484350Z" level=error msg="ContainerStatus for \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.641887590Z" level=error msg="ContainerStatus for \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.642300446Z" level=error msg="ContainerStatus for \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.642691571Z" level=error msg="ContainerStatus for \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.643099665Z" level=error msg="ContainerStatus for \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.643505451Z" level=error msg="ContainerStatus for \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.643859707Z" level=error msg="ContainerStatus for \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.644257816Z" level=error msg="ContainerStatus for \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.644653396Z" level=error msg="ContainerStatus for \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.645052444Z" level=error msg="ContainerStatus for \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.645400330Z" level=error msg="ContainerStatus for \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.645871316Z" level=error msg="ContainerStatus for \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.646314940Z" level=error msg="ContainerStatus for \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.646695764Z" level=error msg="ContainerStatus for \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.647114101Z" level=error msg="ContainerStatus for \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.647529305Z" level=error msg="ContainerStatus for \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": not found"
	Jan 15 11:39:35 addons-391328 containerd[780]: time="2024-01-15T11:39:35.647958831Z" level=error msg="ContainerStatus for \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	
	
	==> coredns [2862e7247c61c477bbe65b2e4ea9ef24a074e80dcafbf3a51f7673212f6b4028] <==
	[INFO] 10.244.0.15:48882 - 28913 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075647s
	[INFO] 10.244.0.15:45992 - 9422 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003497902s
	[INFO] 10.244.0.15:45992 - 8899 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003627769s
	[INFO] 10.244.0.15:58923 - 21696 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00428261s
	[INFO] 10.244.0.15:58923 - 59452 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004752757s
	[INFO] 10.244.0.15:57910 - 13759 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00454604s
	[INFO] 10.244.0.15:57910 - 21411 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005264724s
	[INFO] 10.244.0.15:42382 - 49088 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000089031s
	[INFO] 10.244.0.15:42382 - 20478 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142817s
	[INFO] 10.244.0.17:60940 - 60433 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185069s
	[INFO] 10.244.0.17:59477 - 14397 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000222656s
	[INFO] 10.244.0.17:41300 - 48614 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116285s
	[INFO] 10.244.0.17:37170 - 35188 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124243s
	[INFO] 10.244.0.17:44331 - 51017 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154152s
	[INFO] 10.244.0.17:37573 - 53109 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000169829s
	[INFO] 10.244.0.17:60341 - 21528 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004326463s
	[INFO] 10.244.0.17:50799 - 46007 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004489442s
	[INFO] 10.244.0.17:37338 - 46451 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006449136s
	[INFO] 10.244.0.17:42373 - 12360 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006555127s
	[INFO] 10.244.0.17:52630 - 14598 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006169761s
	[INFO] 10.244.0.17:60788 - 11771 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006425964s
	[INFO] 10.244.0.17:33986 - 53741 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000737615s
	[INFO] 10.244.0.17:59041 - 62185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00075387s
	[INFO] 10.244.0.23:43320 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000225539s
	[INFO] 10.244.0.23:40238 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015158s
	
	
	==> describe nodes <==
	Name:               addons-391328
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-391328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e
	                    minikube.k8s.io/name=addons-391328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T11_37_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-391328
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 11:37:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-391328
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 11:39:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 11:39:33 +0000   Mon, 15 Jan 2024 11:37:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 11:39:33 +0000   Mon, 15 Jan 2024 11:37:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 11:39:33 +0000   Mon, 15 Jan 2024 11:37:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 11:39:33 +0000   Mon, 15 Jan 2024 11:37:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-391328
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ff8c1dd5e104391a2f4d3fcfa46e925
	  System UUID:                cf245ce5-2622-46e0-bb03-b00a0ac147bc
	  Boot ID:                    aee63d4d-8bd2-4dcd-a870-80390bdc439a
	  Kernel Version:             5.15.0-1048-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zwhzv           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7s
	  default                     nginx                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         17s
	  gcp-auth                    gcp-auth-d4c87556c-lp7j6                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         100s
	  kube-system                 coredns-5dd5756b68-z8g2n                   100m (1%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (0%!)(MISSING)     111s
	  kube-system                 etcd-addons-391328                         100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (0%!)(MISSING)       0 (0%!)(MISSING)         2m4s
	  kube-system                 kindnet-jf42m                              100m (1%!)(MISSING)     100m (1%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      111s
	  kube-system                 kube-apiserver-addons-391328               250m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m4s
	  kube-system                 kube-controller-manager-addons-391328      200m (2%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m4s
	  kube-system                 kube-proxy-9tqds                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         111s
	  kube-system                 kube-scheduler-addons-391328               100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m4s
	  kube-system                 snapshot-controller-58dbcc7b99-52ngx       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         104s
	  kube-system                 snapshot-controller-58dbcc7b99-6p45q       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         104s
	  kube-system                 storage-provisioner                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         107s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ghr5p    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         105s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-rvrlr             0 (0%!)(MISSING)        0 (0%!)(MISSING)      128Mi (0%!)(MISSING)       256Mi (0%!)(MISSING)     105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 108s  kube-proxy       
	  Normal  Starting                 2m4s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s  kubelet          Node addons-391328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s  kubelet          Node addons-391328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s  kubelet          Node addons-391328 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m4s  kubelet          Node addons-391328 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m4s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m4s  kubelet          Node addons-391328 status is now: NodeReady
	  Normal  RegisteredNode           112s  node-controller  Node addons-391328 event: Registered Node addons-391328 in Controller
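
	For reference, the percentages kubectl prints in the "Allocated resources" block are each pod's requests and limits divided by the node's allocatable capacity, truncated to a whole percent. A minimal sketch, assuming an 8-CPU (8000m) allocatable node — the capacity itself is not shown in this excerpt:

```python
def percent_of_allocatable(quantity_m: int, allocatable_m: int) -> int:
    """Mirror kubectl's display: integer-truncated percentage of allocatable."""
    return quantity_m * 100 // allocatable_m

# Assuming 8000m allocatable CPU (hypothetical for this node):
print(percent_of_allocatable(850, 8000))  # cpu requests 850m -> 10
print(percent_of_allocatable(100, 8000))  # cpu limits 100m   -> 1
```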
	
	
	==> dmesg <==
	
	
	==> etcd [885ccee66b992944d9ea4f29cf585b0c093362873cf48e1b783ad88a8e6a5dcd] <==
	{"level":"info","ts":"2024-01-15T11:37:26.458702Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T11:37:26.458578Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T11:37:26.464555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-15T11:37:26.464769Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-15T11:37:26.845681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-15T11:37:26.845738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-15T11:37:26.845755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-15T11:37:26.845787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-15T11:37:26.845797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-15T11:37:26.845805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-15T11:37:26.845816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-15T11:37:26.847035Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-391328 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T11:37:26.847066Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T11:37:26.847127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T11:37:26.847215Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:37:26.847955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T11:37:26.848015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-15T11:37:26.848072Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:37:26.848176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:37:26.848202Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:37:26.848337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-15T11:37:26.84838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T11:37:46.853977Z","caller":"traceutil/trace.go:171","msg":"trace[1308906984] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"107.461648ms","start":"2024-01-15T11:37:46.746486Z","end":"2024-01-15T11:37:46.853948Z","steps":["trace[1308906984] 'process raft request'  (duration: 97.410357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:37:46.85437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.557342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-qjksc\" ","response":"range_response_count:1 size:4465"}
	{"level":"info","ts":"2024-01-15T11:37:46.854449Z","caller":"traceutil/trace.go:171","msg":"trace[405026223] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-qjksc; range_end:; response_count:1; response_revision:386; }","duration":"109.666391ms","start":"2024-01-15T11:37:46.744761Z","end":"2024-01-15T11:37:46.854427Z","steps":["trace[405026223] 'range keys from in-memory index tree'  (duration: 98.767889ms)"],"step_count":1}
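
	etcd emits the "apply request took too long" warning above whenever an apply exceeds its 100ms expected duration. A minimal sketch of pulling the measured duration out of one of these structured log lines — it assumes the `took` field is reported in milliseconds, as in the entries above:

```python
import json

WARN_THRESHOLD_MS = 100.0  # etcd's expected apply duration from the log above

def is_slow_apply(log_line: str) -> bool:
    """Parse one etcd JSON log line and flag applies over the threshold."""
    entry = json.loads(log_line)
    took_ms = float(entry["took"].rstrip("ms"))  # assumes ms-suffixed durations
    return took_ms > WARN_THRESHOLD_MS

line = ('{"level":"warn","msg":"apply request took too long",'
        '"took":"109.557342ms","expected-duration":"100ms"}')
print(is_slow_apply(line))  # -> True
```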
	
	
	==> gcp-auth [16abe334e05d4b6488296a33f3f77b13c86cf38a980e757c1c62074648ba19c4] <==
	2024/01/15 11:38:36 GCP Auth Webhook started!
	2024/01/15 11:39:06 Ready to marshal response ...
	2024/01/15 11:39:06 Ready to write response ...
	2024/01/15 11:39:10 Ready to marshal response ...
	2024/01/15 11:39:10 Ready to write response ...
	2024/01/15 11:39:11 Ready to marshal response ...
	2024/01/15 11:39:11 Ready to write response ...
	2024/01/15 11:39:14 Ready to marshal response ...
	2024/01/15 11:39:14 Ready to write response ...
	2024/01/15 11:39:14 Ready to marshal response ...
	2024/01/15 11:39:14 Ready to write response ...
	2024/01/15 11:39:18 Ready to marshal response ...
	2024/01/15 11:39:18 Ready to write response ...
	2024/01/15 11:39:24 Ready to marshal response ...
	2024/01/15 11:39:24 Ready to write response ...
	2024/01/15 11:39:25 Ready to marshal response ...
	2024/01/15 11:39:25 Ready to write response ...
	2024/01/15 11:39:28 Ready to marshal response ...
	2024/01/15 11:39:28 Ready to write response ...
	
	
	==> kernel <==
	 11:39:35 up  2:22,  0 users,  load average: 2.36, 2.06, 2.02
	Linux addons-391328 5.15.0-1048-gcp #56~20.04.1-Ubuntu SMP Fri Nov 24 16:52:37 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
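
	The load averages in the `uptime` line above can be extracted for trend checks across test runs; a small sketch, assuming the standard `load average:` suffix format:

```python
def load_averages(uptime_line: str) -> list[float]:
    """Extract the 1/5/15-minute load averages from an `uptime` line."""
    tail = uptime_line.split("load average:")[1]
    return [float(x) for x in tail.split(",")]

print(load_averages(" 11:39:35 up  2:22,  0 users,  load average: 2.36, 2.06, 2.02"))
# -> [2.36, 2.06, 2.02]
```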
	
	
	==> kindnet [526de9ff2cdbcb796430e504215045647ef594c29269fa50e4a0999ee8c52123] <==
	I0115 11:37:46.044166       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0115 11:37:46.044240       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0115 11:37:46.044366       1 main.go:116] setting mtu 1500 for CNI 
	I0115 11:37:46.044444       1 main.go:146] kindnetd IP family: "ipv4"
	I0115 11:37:46.044470       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0115 11:38:16.373861       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0115 11:38:16.382893       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:38:16.382926       1 main.go:227] handling current node
	I0115 11:38:26.395831       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:38:26.395863       1 main.go:227] handling current node
	I0115 11:38:36.408076       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:38:36.408101       1 main.go:227] handling current node
	I0115 11:38:46.411760       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:38:46.411787       1 main.go:227] handling current node
	I0115 11:38:56.419320       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:38:56.419346       1 main.go:227] handling current node
	I0115 11:39:06.423585       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:39:06.423608       1 main.go:227] handling current node
	I0115 11:39:16.435913       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:39:16.435941       1 main.go:227] handling current node
	I0115 11:39:26.439717       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:39:26.439742       1 main.go:227] handling current node
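
	The kindnet entries above show the node being re-handled on a fixed cadence; comparing consecutive log timestamps gives the resync interval. A minimal sketch (timestamps truncated to whole seconds, same day assumed):

```python
from datetime import datetime

def seconds_between(a: str, b: str) -> float:
    """Interval between two HH:MM:SS log timestamps on the same day."""
    fmt = "%H:%M:%S"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds()

# Two consecutive "handling current node" entries from the log above:
print(seconds_between("11:38:16", "11:38:26"))  # -> 10.0
```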
	
	
	==> kube-apiserver [ac5a448b73cd722ce5270ef745c361955f6e3a40df5042731527b7a446ecee39] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 11:37:55.748223       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.239.235"}
	E0115 11:38:02.091361       1 writers.go:116] apiserver was unable to close cleanly the response writer: client disconnected
	E0115 11:38:02.091468       1 wrap.go:54] timeout or abort while handling: method=GET URI="/api/v1/pods?fieldSelector=spec.nodeName%3Daddons-391328&limit=500&resourceVersion=0" audit-ID="2b4d2fc6-79b1-4c20-a82e-498651d5a57b"
	E0115 11:38:02.091511       1 timeout.go:142] post-timeout activity - time-elapsed: 7.485µs, GET "/api/v1/pods" result: <nil>
	I0115 11:38:28.237070       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 11:38:39.242189       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 11:38:39.242236       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.42.174:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.42.174:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.42.174:443: connect: connection refused
	E0115 11:38:39.242255       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0115 11:38:39.242722       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.42.174:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.42.174:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.42.174:443: connect: connection refused
	I0115 11:38:39.272454       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0115 11:38:39.341178       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0115 11:39:08.869049       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.22:46252: read: connection reset by peer
	I0115 11:39:16.478961       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0115 11:39:16.485513       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0115 11:39:16.968418       1 dispatcher.go:217] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.109.201.235:443: connect: connection refused
	W0115 11:39:17.495191       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0115 11:39:18.089277       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0115 11:39:18.256683       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.86.206"}
	I0115 11:39:22.251408       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0115 11:39:25.504870       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0115 11:39:25.507373       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0115 11:39:25.510035       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0115 11:39:28.832077       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.167.97"}
	
	
	==> kube-controller-manager [1b78d94c40a3d08f597620f25b2706c3c5ad2db2b11d1e2bf4ce92c09b23ddeb] <==
	W0115 11:39:20.906153       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 11:39:20.906194       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 11:39:24.732896       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0115 11:39:24.732930       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0115 11:39:25.105000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="11.207µs"
	I0115 11:39:25.211092       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0115 11:39:26.720671       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0115 11:39:27.027139       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 11:39:27.027171       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 11:39:28.670551       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0115 11:39:28.679737       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-zwhzv"
	I0115 11:39:28.686256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.839871ms"
	I0115 11:39:28.691507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.197969ms"
	I0115 11:39:28.691615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.98µs"
	I0115 11:39:28.697626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.629µs"
	I0115 11:39:30.471043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.513143ms"
	I0115 11:39:30.471141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.599µs"
	I0115 11:39:31.028747       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0115 11:39:31.030309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.092µs"
	I0115 11:39:31.038763       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0115 11:39:33.774250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-64c8c85f65" duration="8.124µs"
	I0115 11:39:34.456822       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0115 11:39:34.549436       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W0115 11:39:34.869846       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 11:39:34.869886       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [295e88588854d8840001ba1e7e81454f885b7b2b78cefaa58259d88b1e9e7d76] <==
	I0115 11:37:46.350739       1 server_others.go:69] "Using iptables proxy"
	I0115 11:37:46.541504       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0115 11:37:46.856805       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0115 11:37:46.949961       1 server_others.go:152] "Using iptables Proxier"
	I0115 11:37:46.950020       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0115 11:37:46.950030       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0115 11:37:46.950066       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 11:37:46.950345       1 server.go:846] "Version info" version="v1.28.4"
	I0115 11:37:46.950365       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 11:37:46.953572       1 config.go:188] "Starting service config controller"
	I0115 11:37:46.953591       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 11:37:46.953595       1 config.go:315] "Starting node config controller"
	I0115 11:37:46.953607       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 11:37:46.953612       1 config.go:97] "Starting endpoint slice config controller"
	I0115 11:37:46.953617       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 11:37:47.057225       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 11:37:47.057294       1 shared_informer.go:318] Caches are synced for service config
	I0115 11:37:47.060227       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0818e33a39ae7d33bbaacb8b6fa299258ad68586e3c0afffed6b84391670d256] <==
	E0115 11:37:28.455962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0115 11:37:28.455942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 11:37:28.455948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 11:37:28.456039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 11:37:28.456068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 11:37:28.456044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 11:37:28.456100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 11:37:28.456128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0115 11:37:28.456138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 11:37:28.456170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 11:37:28.456232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 11:37:28.456270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 11:37:28.456609       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0115 11:37:28.456635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 11:37:28.456724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 11:37:28.456936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 11:37:29.260311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 11:37:29.260342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 11:37:29.281765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 11:37:29.281815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 11:37:29.289166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 11:37:29.289193       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 11:37:29.431839       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 11:37:29.431877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0115 11:37:29.748547       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.643291    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15"} err="failed to get container status \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.643318    1510 scope.go:117] "RemoveContainer" containerID="eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.643662    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe"} err="failed to get container status \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": rpc error: code = NotFound desc = an error occurred when try to find container \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.643685    1510 scope.go:117] "RemoveContainer" containerID="190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.644005    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c"} err="failed to get container status \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": rpc error: code = NotFound desc = an error occurred when try to find container \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.644026    1510 scope.go:117] "RemoveContainer" containerID="e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.644444    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5"} err="failed to get container status \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.644471    1510 scope.go:117] "RemoveContainer" containerID="f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.644845    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820"} err="failed to get container status \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.644878    1510 scope.go:117] "RemoveContainer" containerID="3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.645211    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39"} err="failed to get container status \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.645232    1510 scope.go:117] "RemoveContainer" containerID="f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.645579    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15"} err="failed to get container status \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.645608    1510 scope.go:117] "RemoveContainer" containerID="eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.646094    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe"} err="failed to get container status \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": rpc error: code = NotFound desc = an error occurred when try to find container \"eed08fdf92d05ef5aafc50697645610b37369c0a528121d6bbdd988e5a353dfe\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.646118    1510 scope.go:117] "RemoveContainer" containerID="190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.646474    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c"} err="failed to get container status \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": rpc error: code = NotFound desc = an error occurred when try to find container \"190b4a7cbf4f336c37267f5f57dc672ede0fabcc1848e02927278f06633fa28c\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.646500    1510 scope.go:117] "RemoveContainer" containerID="e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.646900    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5"} err="failed to get container status \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e33c150457c2486fa69f1fb6f24c3868b63f50d568eff18a090aa94e39376af5\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.646921    1510 scope.go:117] "RemoveContainer" containerID="f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.647302    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820"} err="failed to get container status \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6885732e64e83db8c71d78505a2db642e99d28eab1f570e0b36b78da60f3820\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.647328    1510 scope.go:117] "RemoveContainer" containerID="3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.647731    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39"} err="failed to get container status \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fadc2bb0c28a90500a3fcf9220b592870b43b715e06b4d4546bddc599be6d39\": not found"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.647754    1510 scope.go:117] "RemoveContainer" containerID="f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15"
	Jan 15 11:39:35 addons-391328 kubelet[1510]: I0115 11:39:35.648137    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15"} err="failed to get container status \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1a44409f4f94c3ac643490f0fdfeda22b3b99a9a88c0d3158834937e1b23b15\": not found"
	
	
	==> storage-provisioner [6848d7d748f0c8480b4927b54b1d78a7fd06347fc17e71cd9e3d7f4ad6ad4f4e] <==
	I0115 11:37:50.545733       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 11:37:50.560671       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 11:37:50.560721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 11:37:50.654231       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 11:37:50.654439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-391328_ffd1e383-eae0-4b65-9597-ab3b2b849980!
	I0115 11:37:50.654550       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae2befe7-c6eb-4aa3-a76a-f19ffc5cd678", APIVersion:"v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-391328_ffd1e383-eae0-4b65-9597-ab3b2b849980 became leader
	I0115 11:37:50.754996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-391328_ffd1e383-eae0-4b65-9597-ab3b2b849980!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-391328 -n addons-391328
helpers_test.go:261: (dbg) Run:  kubectl --context addons-391328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Headlamp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Headlamp (2.92s)


Test pass (293/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 4.32
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.2
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 4.43
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.03
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.2
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 1.29
30 TestBinaryMirror 0.74
31 TestOffline 68.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 123.36
38 TestAddons/parallel/Registry 13.75
39 TestAddons/parallel/Ingress 25.67
40 TestAddons/parallel/InspektorGadget 10.83
41 TestAddons/parallel/MetricsServer 5.71
42 TestAddons/parallel/HelmTiller 10.16
44 TestAddons/parallel/CSI 40.33
46 TestAddons/parallel/CloudSpanner 5.57
47 TestAddons/parallel/LocalPath 52.98
48 TestAddons/parallel/NvidiaDevicePlugin 6.47
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 12.14
54 TestCertOptions 24.42
55 TestCertExpiration 211.7
57 TestForceSystemdFlag 26.42
58 TestForceSystemdEnv 41.87
59 TestDockerEnvContainerd 39.58
60 TestKVMDriverInstallOrUpdate 3.27
64 TestErrorSpam/setup 23.01
65 TestErrorSpam/start 0.63
66 TestErrorSpam/status 0.92
67 TestErrorSpam/pause 1.53
68 TestErrorSpam/unpause 1.54
69 TestErrorSpam/stop 1.4
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 52.52
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 4.87
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
81 TestFunctional/serial/CacheCmd/cache/add_local 1.43
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 41.15
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.37
92 TestFunctional/serial/LogsFileCmd 1.38
93 TestFunctional/serial/InvalidService 3.75
95 TestFunctional/parallel/ConfigCmd 0.56
96 TestFunctional/parallel/DashboardCmd 13.22
97 TestFunctional/parallel/DryRun 0.55
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.02
103 TestFunctional/parallel/ServiceCmdConnect 10.56
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 33.65
107 TestFunctional/parallel/SSHCmd 0.61
108 TestFunctional/parallel/CpCmd 1.99
109 TestFunctional/parallel/MySQL 20.11
110 TestFunctional/parallel/FileSync 0.34
111 TestFunctional/parallel/CertSync 1.81
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
119 TestFunctional/parallel/License 0.18
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
123 TestFunctional/parallel/Version/components 0.99
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
128 TestFunctional/parallel/ImageCommands/ImageBuild 3.53
129 TestFunctional/parallel/ImageCommands/Setup 1
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.3
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.63
135 TestFunctional/parallel/ProfileCmd/profile_list 0.43
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.39
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.7
146 TestFunctional/parallel/MountCmd/any-port 6.95
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.52
150 TestFunctional/parallel/ServiceCmd/List 0.52
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.87
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
154 TestFunctional/parallel/ServiceCmd/Format 0.37
155 TestFunctional/parallel/ServiceCmd/URL 0.41
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
159 TestFunctional/parallel/MountCmd/specific-port 2.02
160 TestFunctional/parallel/MountCmd/VerifyCleanup 2.05
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestIngressAddonLegacy/StartLegacyK8sCluster 68.01
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.82
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
171 TestIngressAddonLegacy/serial/ValidateIngressAddons 29.56
174 TestJSONOutput/start/Command 53.27
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.68
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.59
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.62
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.23
199 TestKicCustomNetwork/create_custom_network 28.28
200 TestKicCustomNetwork/use_default_bridge_network 23.91
201 TestKicExistingNetwork 24.21
202 TestKicCustomSubnet 27.28
203 TestKicStaticIP 24.78
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 49.14
208 TestMountStart/serial/StartWithMountFirst 7.65
209 TestMountStart/serial/VerifyMountFirst 0.26
210 TestMountStart/serial/StartWithMountSecond 4.79
211 TestMountStart/serial/VerifyMountSecond 0.26
212 TestMountStart/serial/DeleteFirst 1.59
213 TestMountStart/serial/VerifyMountPostDelete 0.27
214 TestMountStart/serial/Stop 1.18
215 TestMountStart/serial/RestartStopped 6.66
216 TestMountStart/serial/VerifyMountPostStop 0.26
219 TestMultiNode/serial/FreshStart2Nodes 70.8
220 TestMultiNode/serial/DeployApp2Nodes 8.31
221 TestMultiNode/serial/PingHostFrom2Pods 0.78
222 TestMultiNode/serial/AddNode 20.15
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.29
225 TestMultiNode/serial/CopyFile 9.47
226 TestMultiNode/serial/StopNode 2.15
227 TestMultiNode/serial/StartAfterStop 11.14
228 TestMultiNode/serial/RestartKeepsNodes 111.5
229 TestMultiNode/serial/DeleteNode 4.68
230 TestMultiNode/serial/StopMultiNode 23.79
231 TestMultiNode/serial/RestartMultiNode 78.05
232 TestMultiNode/serial/ValidateNameConflict 25.51
237 TestPreload 143.53
239 TestScheduledStopUnix 97.93
242 TestInsufficientStorage 9.98
243 TestRunningBinaryUpgrade 57.76
245 TestKubernetesUpgrade 344.46
246 TestMissingContainerUpgrade 136.22
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
249 TestNoKubernetes/serial/StartWithK8s 39.1
250 TestNoKubernetes/serial/StartWithStopK8s 15.41
258 TestNoKubernetes/serial/Start 7.94
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
260 TestNoKubernetes/serial/ProfileList 1.21
261 TestNoKubernetes/serial/Stop 1.21
262 TestNoKubernetes/serial/StartNoArgs 6.26
263 TestStoppedBinaryUpgrade/Setup 0.36
264 TestStoppedBinaryUpgrade/Upgrade 91.43
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
267 TestPause/serial/Start 49.86
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
269 TestPause/serial/SecondStartNoReconfiguration 5.49
273 TestPause/serial/Pause 0.72
274 TestPause/serial/VerifyStatus 0.35
275 TestPause/serial/Unpause 0.7
276 TestPause/serial/PauseAgain 0.83
281 TestNetworkPlugins/group/false 3.99
282 TestPause/serial/DeletePaused 2.69
283 TestPause/serial/VerifyDeletedResources 0.51
288 TestStartStop/group/old-k8s-version/serial/FirstStart 114.8
290 TestStartStop/group/embed-certs/serial/FirstStart 49.98
291 TestStartStop/group/embed-certs/serial/DeployApp 8.24
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
293 TestStartStop/group/embed-certs/serial/Stop 11.9
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
295 TestStartStop/group/embed-certs/serial/SecondStart 590.84
296 TestStartStop/group/old-k8s-version/serial/DeployApp 7.36
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
298 TestStartStop/group/old-k8s-version/serial/Stop 11.87
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 94.51
302 TestStartStop/group/no-preload/serial/FirstStart 57.8
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.27
305 TestStartStop/group/no-preload/serial/DeployApp 8.26
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.07
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.92
309 TestStartStop/group/no-preload/serial/Stop 14.2
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
311 TestStartStop/group/old-k8s-version/serial/Pause 2.73
313 TestStartStop/group/newest-cni/serial/FirstStart 36.55
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
315 TestStartStop/group/no-preload/serial/SecondStart 329.85
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 6.27
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.86
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 331.95
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.92
323 TestStartStop/group/newest-cni/serial/Stop 1.2
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/newest-cni/serial/SecondStart 25.59
326 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
329 TestStartStop/group/newest-cni/serial/Pause 2.66
330 TestNetworkPlugins/group/auto/Start 48.51
331 TestNetworkPlugins/group/auto/KubeletFlags 0.28
332 TestNetworkPlugins/group/auto/NetCatPod 8.19
333 TestNetworkPlugins/group/auto/DNS 0.14
334 TestNetworkPlugins/group/auto/Localhost 0.12
335 TestNetworkPlugins/group/auto/HairPin 0.12
336 TestNetworkPlugins/group/kindnet/Start 48.53
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
339 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
340 TestNetworkPlugins/group/kindnet/DNS 0.13
341 TestNetworkPlugins/group/kindnet/Localhost 0.11
342 TestNetworkPlugins/group/kindnet/HairPin 0.1
343 TestNetworkPlugins/group/calico/Start 65
344 TestNetworkPlugins/group/calico/ControllerPod 6.01
345 TestNetworkPlugins/group/calico/KubeletFlags 0.28
346 TestNetworkPlugins/group/calico/NetCatPod 9.18
347 TestNetworkPlugins/group/calico/DNS 0.14
348 TestNetworkPlugins/group/calico/Localhost 0.12
349 TestNetworkPlugins/group/calico/HairPin 0.11
350 TestNetworkPlugins/group/custom-flannel/Start 54.42
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
354 TestStartStop/group/no-preload/serial/Pause 2.76
355 TestNetworkPlugins/group/enable-default-cni/Start 42.3
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
357 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
358 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.91
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
362 TestNetworkPlugins/group/flannel/Start 53.05
363 TestNetworkPlugins/group/custom-flannel/DNS 0.14
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
371 TestNetworkPlugins/group/bridge/Start 79.16
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
375 TestStartStop/group/embed-certs/serial/Pause 3.12
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
378 TestNetworkPlugins/group/flannel/NetCatPod 8.2
379 TestNetworkPlugins/group/flannel/DNS 0.14
380 TestNetworkPlugins/group/flannel/Localhost 0.11
381 TestNetworkPlugins/group/flannel/HairPin 0.11
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
383 TestNetworkPlugins/group/bridge/NetCatPod 8.17
384 TestNetworkPlugins/group/bridge/DNS 0.12
385 TestNetworkPlugins/group/bridge/Localhost 0.1
386 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.16.0/json-events (4.32s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-128970 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-128970 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.318023736s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.32s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-128970
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-128970: exit status 85 (77.639341ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-128970 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |          |
	|         | -p download-only-128970        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:36:40
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:36:40.977649  113297 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:36:40.977943  113297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:40.977953  113297 out.go:309] Setting ErrFile to fd 2...
	I0115 11:36:40.977958  113297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:40.978132  113297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	W0115 11:36:40.978258  113297 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17957-106484/.minikube/config/config.json: open /home/jenkins/minikube-integration/17957-106484/.minikube/config/config.json: no such file or directory
	I0115 11:36:40.978866  113297 out.go:303] Setting JSON to true
	I0115 11:36:40.979773  113297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8350,"bootTime":1705310251,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:36:40.979839  113297 start.go:138] virtualization: kvm guest
	I0115 11:36:40.982562  113297 out.go:97] [download-only-128970] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0115 11:36:40.982648  113297 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17957-106484/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 11:36:40.984244  113297 out.go:169] MINIKUBE_LOCATION=17957
	I0115 11:36:40.982778  113297 notify.go:220] Checking for updates...
	I0115 11:36:40.987329  113297 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:36:40.988931  113297 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 11:36:40.990580  113297 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	I0115 11:36:40.992029  113297 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 11:36:40.995296  113297 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 11:36:40.995569  113297 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:36:41.019162  113297 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:36:41.019315  113297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:41.372938  113297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-15 11:36:41.364120481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:41.373032  113297 docker.go:295] overlay module found
	I0115 11:36:41.374982  113297 out.go:97] Using the docker driver based on user configuration
	I0115 11:36:41.375008  113297 start.go:298] selected driver: docker
	I0115 11:36:41.375016  113297 start.go:902] validating driver "docker" against <nil>
	I0115 11:36:41.375097  113297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:41.426531  113297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-15 11:36:41.418072671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:41.426690  113297 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:36:41.427163  113297 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0115 11:36:41.427307  113297 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 11:36:41.429412  113297 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-128970"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-128970
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (4.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-880748 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-880748 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.433352994s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.43s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-880748
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-880748: exit status 85 (76.660077ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-128970 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | -p download-only-128970        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| delete  | -p download-only-128970        | download-only-128970 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| start   | -o=json --download-only        | download-only-880748 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | -p download-only-880748        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:36:45
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:36:45.713425  113575 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:36:45.713711  113575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:45.713722  113575 out.go:309] Setting ErrFile to fd 2...
	I0115 11:36:45.713729  113575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:45.713944  113575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:36:45.714537  113575 out.go:303] Setting JSON to true
	I0115 11:36:45.715433  113575 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8355,"bootTime":1705310251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:36:45.715508  113575 start.go:138] virtualization: kvm guest
	I0115 11:36:45.717927  113575 out.go:97] [download-only-880748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:36:45.718145  113575 notify.go:220] Checking for updates...
	I0115 11:36:45.719651  113575 out.go:169] MINIKUBE_LOCATION=17957
	I0115 11:36:45.721400  113575 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:36:45.723111  113575 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 11:36:45.724677  113575 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	I0115 11:36:45.726179  113575 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 11:36:45.728757  113575 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 11:36:45.728990  113575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:36:45.750936  113575 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:36:45.751031  113575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:45.806643  113575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-15 11:36:45.797735654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:45.806742  113575 docker.go:295] overlay module found
	I0115 11:36:45.808765  113575 out.go:97] Using the docker driver based on user configuration
	I0115 11:36:45.808804  113575 start.go:298] selected driver: docker
	I0115 11:36:45.808811  113575 start.go:902] validating driver "docker" against <nil>
	I0115 11:36:45.808906  113575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:45.861232  113575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-15 11:36:45.853354329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:45.861385  113575 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:36:45.861855  113575 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0115 11:36:45.861998  113575 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 11:36:45.864056  113575 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-880748"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-880748
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (4.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-632060 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-632060 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.028593644s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.03s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-632060
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-632060: exit status 85 (75.805276ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-128970 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | -p download-only-128970           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| delete  | -p download-only-128970           | download-only-128970 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| start   | -o=json --download-only           | download-only-880748 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | -p download-only-880748           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| delete  | -p download-only-880748           | download-only-880748 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC | 15 Jan 24 11:36 UTC |
	| start   | -o=json --download-only           | download-only-632060 | jenkins | v1.32.0 | 15 Jan 24 11:36 UTC |                     |
	|         | -p download-only-632060           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:36:50
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:36:50.573955  113853 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:36:50.574189  113853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:50.574197  113853 out.go:309] Setting ErrFile to fd 2...
	I0115 11:36:50.574202  113853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:36:50.574363  113853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:36:50.574916  113853 out.go:303] Setting JSON to true
	I0115 11:36:50.575732  113853 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8360,"bootTime":1705310251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:36:50.575800  113853 start.go:138] virtualization: kvm guest
	I0115 11:36:50.578143  113853 out.go:97] [download-only-632060] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:36:50.579799  113853 out.go:169] MINIKUBE_LOCATION=17957
	I0115 11:36:50.578278  113853 notify.go:220] Checking for updates...
	I0115 11:36:50.582308  113853 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:36:50.583587  113853 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 11:36:50.584920  113853 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	I0115 11:36:50.586148  113853 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 11:36:50.588870  113853 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 11:36:50.589078  113853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:36:50.610389  113853 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:36:50.610481  113853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:50.669505  113853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-15 11:36:50.660558764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:50.669612  113853 docker.go:295] overlay module found
	I0115 11:36:50.671622  113853 out.go:97] Using the docker driver based on user configuration
	I0115 11:36:50.671646  113853 start.go:298] selected driver: docker
	I0115 11:36:50.671652  113853 start.go:902] validating driver "docker" against <nil>
	I0115 11:36:50.671740  113853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:36:50.722347  113853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-15 11:36:50.714122052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:36:50.722512  113853 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:36:50.723016  113853 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0115 11:36:50.723165  113853 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 11:36:50.725086  113853 out.go:169] Using Docker driver with root privileges
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-632060"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.20s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-632060
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.29s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-788749 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-788749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-788749
--- PASS: TestDownloadOnlyKic (1.29s)

TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-483395 --alsologtostderr --binary-mirror http://127.0.0.1:33409 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-483395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-483395
--- PASS: TestBinaryMirror (0.74s)

TestOffline (68.82s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-445977 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-445977 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m6.553676227s)
helpers_test.go:175: Cleaning up "offline-containerd-445977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-445977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-445977: (2.270227384s)
--- PASS: TestOffline (68.82s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-391328
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-391328: exit status 85 (63.171647ms)

-- stdout --
	* Profile "addons-391328" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391328"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-391328
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-391328: exit status 85 (64.38378ms)

-- stdout --
	* Profile "addons-391328" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391328"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (123.36s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-391328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-391328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m3.35953443s)
--- PASS: TestAddons/Setup (123.36s)

TestAddons/parallel/Registry (13.75s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 17.139442ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-mwfqp" [2f7e31d1-e2b8-448a-be07-28fbd7de6478] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004556462s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v8hx4" [2ea4d868-9428-475d-834a-0f2a89643232] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004618525s
addons_test.go:340: (dbg) Run:  kubectl --context addons-391328 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-391328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-391328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.642550817s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 ip
2024/01/15 11:39:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.75s)

TestAddons/parallel/Ingress (25.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-391328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context addons-391328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (4.326730858s)
addons_test.go:232: (dbg) Run:  kubectl --context addons-391328 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:232: (dbg) Non-zero exit: kubectl --context addons-391328 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (94.885992ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.109.201.235:443: connect: connection refused

** /stderr **
addons_test.go:232: (dbg) Run:  kubectl --context addons-391328 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-391328 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0010c10b-9e39-422b-929c-93a5e6f7aac6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0010c10b-9e39-422b-929c-93a5e6f7aac6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003698807s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-391328 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-391328 addons disable ingress-dns --alsologtostderr -v=1: (1.353463279s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-391328 addons disable ingress --alsologtostderr -v=1: (7.717058146s)
--- PASS: TestAddons/parallel/Ingress (25.67s)

TestAddons/parallel/InspektorGadget (10.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-r58qv" [52dcdae1-1a44-4779-8f63-8f2a45dd98aa] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004410322s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-391328
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-391328: (5.82379719s)
--- PASS: TestAddons/parallel/InspektorGadget (10.83s)

TestAddons/parallel/MetricsServer (5.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.115659ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-m2wd6" [f4ee5bd5-87e8-4e25-bcd1-f1937e734e0f] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004620776s
addons_test.go:415: (dbg) Run:  kubectl --context addons-391328 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

TestAddons/parallel/HelmTiller (10.16s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 15.29467ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-w8hrz" [3bdf97b7-56ee-499d-9908-6e07f69e36bc] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004363205s
addons_test.go:473: (dbg) Run:  kubectl --context addons-391328 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-391328 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.582039532s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.16s)

TestAddons/parallel/CSI (40.33s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 16.1765ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-391328 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-391328 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d7ba5a1d-442c-432a-b033-0e2d2676c2fe] Pending
helpers_test.go:344: "task-pv-pod" [d7ba5a1d-442c-432a-b033-0e2d2676c2fe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d7ba5a1d-442c-432a-b033-0e2d2676c2fe] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004421484s
addons_test.go:584: (dbg) Run:  kubectl --context addons-391328 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-391328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-391328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-391328 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-391328 delete pod task-pv-pod: (1.125193978s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-391328 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-391328 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-391328 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b16af0fc-771b-4e21-bb08-4ba7485f89ab] Pending
helpers_test.go:344: "task-pv-pod-restore" [b16af0fc-771b-4e21-bb08-4ba7485f89ab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b16af0fc-771b-4e21-bb08-4ba7485f89ab] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004164316s
addons_test.go:626: (dbg) Run:  kubectl --context addons-391328 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-391328 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-391328 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-391328 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.639383885s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.33s)

TestAddons/parallel/CloudSpanner (5.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-zd4mw" [cdf8784d-3900-4e74-b802-41aaa800365d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003221587s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-391328
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

TestAddons/parallel/LocalPath (52.98s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-391328 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-391328 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391328 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1c18c0fe-d09e-4257-b9d4-6f33672fbdfd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1c18c0fe-d09e-4257-b9d4-6f33672fbdfd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1c18c0fe-d09e-4257-b9d4-6f33672fbdfd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004648171s
addons_test.go:891: (dbg) Run:  kubectl --context addons-391328 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 ssh "cat /opt/local-path-provisioner/pvc-495a1e45-6730-4456-9b4a-84b12188efc5_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-391328 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-391328 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-391328 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-391328 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.001876325s)
--- PASS: TestAddons/parallel/LocalPath (52.98s)

TestAddons/parallel/NvidiaDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bcwtk" [a24b6f26-d98a-4c84-8ca9-80c67aaa3202] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004415125s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-391328
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (6.00s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-rvrlr" [4943aecb-ac28-48bb-b210-e15026f7bbe5] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003466859s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-391328 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-391328 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.14s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-391328
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-391328: (11.855185455s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-391328
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-391328
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-391328
--- PASS: TestAddons/StoppedEnableDisable (12.14s)

TestCertOptions (24.42s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-223523 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0115 12:04:00.836439  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-223523 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.818808127s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-223523 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-223523 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-223523 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-223523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-223523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-223523: (1.972253533s)
--- PASS: TestCertOptions (24.42s)

TestCertExpiration (211.70s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-595061 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-595061 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.962719434s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-595061 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-595061 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (4.519431631s)
helpers_test.go:175: Cleaning up "cert-expiration-595061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-595061
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-595061: (2.216806564s)
--- PASS: TestCertExpiration (211.70s)

TestForceSystemdFlag (26.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-870317 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-870317 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.82476737s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-870317 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-870317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-870317
E0115 12:03:46.585619  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-870317: (2.284222265s)
--- PASS: TestForceSystemdFlag (26.42s)

TestForceSystemdEnv (41.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-542405 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-542405 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.246308775s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-542405 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-542405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-542405
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-542405: (2.286025553s)
--- PASS: TestForceSystemdEnv (41.87s)

TestDockerEnvContainerd (39.58s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-695290 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-695290 --driver=docker  --container-runtime=containerd: (23.891037156s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-695290"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-695290": (1.177759308s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-8CbZNwcYNp3R/agent.134217" SSH_AGENT_PID="134218" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-8CbZNwcYNp3R/agent.134217" SSH_AGENT_PID="134218" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-8CbZNwcYNp3R/agent.134217" SSH_AGENT_PID="134218" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.288943754s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-8CbZNwcYNp3R/agent.134217" SSH_AGENT_PID="134218" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-695290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-695290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-695290: (2.146462065s)
--- PASS: TestDockerEnvContainerd (39.58s)

TestKVMDriverInstallOrUpdate (3.27s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.27s)

TestErrorSpam/setup (23.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-813204 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-813204 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-813204 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-813204 --driver=docker  --container-runtime=containerd: (23.008612866s)
--- PASS: TestErrorSpam/setup (23.01s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (1.4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 stop: (1.192829756s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813204 --log_dir /tmp/nospam-813204 stop
--- PASS: TestErrorSpam/stop (1.40s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17957-106484/.minikube/files/etc/test/nested/copy/113285/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.52s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-401444 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-401444 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.515772972s)
--- PASS: TestFunctional/serial/StartWithProxy (52.52s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (4.87s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-401444 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-401444 --alsologtostderr -v=8: (4.869467817s)
functional_test.go:659: soft start took 4.870248856s for "functional-401444" cluster.
--- PASS: TestFunctional/serial/SoftStart (4.87s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-401444 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 cache add registry.k8s.io/pause:3.1: (1.006662343s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 cache add registry.k8s.io/pause:3.3: (1.133015331s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-401444 /tmp/TestFunctionalserialCacheCmdcacheadd_local3600155445/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cache add minikube-local-cache-test:functional-401444
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 cache add minikube-local-cache-test:functional-401444: (1.096842474s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cache delete minikube-local-cache-test:functional-401444
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-401444
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.712535ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 kubectl -- --context functional-401444 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-401444 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (41.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-401444 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-401444 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.14915955s)
functional_test.go:757: restart took 41.149291702s for "functional-401444" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.15s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-401444 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.37s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 logs: (1.370205884s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 logs --file /tmp/TestFunctionalserialLogsFileCmd3753269117/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 logs --file /tmp/TestFunctionalserialLogsFileCmd3753269117/001/logs.txt: (1.377820279s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

TestFunctional/serial/InvalidService (3.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-401444 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-401444
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-401444: exit status 115 (344.354012ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30918 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-401444 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.75s)

TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 config get cpus: exit status 14 (101.099238ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 config get cpus: exit status 14 (95.713157ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)

TestFunctional/parallel/DashboardCmd (13.22s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-401444 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-401444 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 155233: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.22s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-401444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-401444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (228.717153ms)

-- stdout --
	* [functional-401444] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0115 11:44:06.677326  153081 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:44:06.677487  153081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:44:06.677498  153081 out.go:309] Setting ErrFile to fd 2...
	I0115 11:44:06.677506  153081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:44:06.677750  153081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:44:06.678438  153081 out.go:303] Setting JSON to false
	I0115 11:44:06.679750  153081 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8796,"bootTime":1705310251,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:44:06.679827  153081 start.go:138] virtualization: kvm guest
	I0115 11:44:06.682366  153081 out.go:177] * [functional-401444] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:44:06.684620  153081 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 11:44:06.684621  153081 notify.go:220] Checking for updates...
	I0115 11:44:06.686266  153081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:44:06.687887  153081 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 11:44:06.689413  153081 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	I0115 11:44:06.690836  153081 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 11:44:06.692273  153081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:44:06.694404  153081 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:44:06.695183  153081 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:44:06.735836  153081 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:44:06.736024  153081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:44:06.825571  153081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-15 11:44:06.811658604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:44:06.825658  153081 docker.go:295] overlay module found
	I0115 11:44:06.828111  153081 out.go:177] * Using the docker driver based on existing profile
	I0115 11:44:06.829676  153081 start.go:298] selected driver: docker
	I0115 11:44:06.829699  153081 start.go:902] validating driver "docker" against &{Name:functional-401444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-401444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:44:06.829830  153081 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:44:06.832113  153081 out.go:177] 
	W0115 11:44:06.833700  153081 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 11:44:06.835086  153081 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-401444 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-401444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-401444 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (169.336245ms)

-- stdout --
	* [functional-401444] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0115 11:43:58.891430  149967 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:43:58.891608  149967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:43:58.891618  149967 out.go:309] Setting ErrFile to fd 2...
	I0115 11:43:58.891626  149967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:43:58.891927  149967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:43:58.892509  149967 out.go:303] Setting JSON to false
	I0115 11:43:58.893535  149967 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8788,"bootTime":1705310251,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:43:58.893605  149967 start.go:138] virtualization: kvm guest
	I0115 11:43:58.896279  149967 out.go:177] * [functional-401444] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0115 11:43:58.898140  149967 notify.go:220] Checking for updates...
	I0115 11:43:58.898152  149967 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 11:43:58.899627  149967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:43:58.900972  149967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 11:43:58.902869  149967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	I0115 11:43:58.904290  149967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 11:43:58.905630  149967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:43:58.907373  149967 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:43:58.907837  149967 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:43:58.930970  149967 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:43:58.931109  149967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:43:58.988843  149967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2024-01-15 11:43:58.979381217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:43:58.989021  149967 docker.go:295] overlay module found
	I0115 11:43:58.991508  149967 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0115 11:43:58.993013  149967 start.go:298] selected driver: docker
	I0115 11:43:58.993033  149967 start.go:902] validating driver "docker" against &{Name:functional-401444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-401444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:43:58.993145  149967 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:43:58.995513  149967 out.go:177] 
	W0115 11:43:58.997432  149967 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 11:43:58.998918  149967 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/ServiceCmdConnect (10.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-401444 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-401444 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-znfq4" [6d316c2d-b188-4e7b-be19-dbea996346fe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-znfq4" [6d316c2d-b188-4e7b-be19-dbea996346fe] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003697142s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31708
functional_test.go:1674: http://192.168.49.2:31708: success! body:

Hostname: hello-node-connect-55497b8b78-znfq4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31708
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.56s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (33.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fddf1da0-bc4f-4616-a2c7-6b35ffc930ab] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003873855s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-401444 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-401444 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-401444 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-401444 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [41ab3d26-5074-4256-95f0-81a959376a6e] Pending
helpers_test.go:344: "sp-pod" [41ab3d26-5074-4256-95f0-81a959376a6e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [41ab3d26-5074-4256-95f0-81a959376a6e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004605312s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-401444 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-401444 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-401444 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ad0c8219-d94a-446a-964c-4141331e4c33] Pending
helpers_test.go:344: "sp-pod" [ad0c8219-d94a-446a-964c-4141331e4c33] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ad0c8219-d94a-446a-964c-4141331e4c33] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004646637s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-401444 exec sp-pod -- ls /tmp/mount
E0115 11:44:21.324091  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.65s)

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (1.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh -n functional-401444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cp functional-401444:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1247041529/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh -n functional-401444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh -n functional-401444 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

TestFunctional/parallel/MySQL (20.11s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-401444 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-hv5hn" [8d014d76-dbef-43c7-a57f-f56124417d4c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-hv5hn" [8d014d76-dbef-43c7-a57f-f56124417d4c] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004435106s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-401444 exec mysql-859648c796-hv5hn -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-401444 exec mysql-859648c796-hv5hn -- mysql -ppassword -e "show databases;": exit status 1 (108.494247ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
2024/01/15 11:44:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1806: (dbg) Run:  kubectl --context functional-401444 exec mysql-859648c796-hv5hn -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-401444 exec mysql-859648c796-hv5hn -- mysql -ppassword -e "show databases;": exit status 1 (104.29516ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-401444 exec mysql-859648c796-hv5hn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.11s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/113285/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo cat /etc/test/nested/copy/113285/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (1.81s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/113285.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo cat /etc/ssl/certs/113285.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/113285.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo cat /usr/share/ca-certificates/113285.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/1132852.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo cat /etc/ssl/certs/1132852.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/1132852.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo cat /usr/share/ca-certificates/1132852.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0115 11:44:05.962619  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (1.81s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-401444 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh "sudo systemctl is-active docker": exit status 1 (368.334701ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh "sudo systemctl is-active crio": exit status 1 (357.863003ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-401444 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-401444 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-401444 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-401444 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 148066: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/Version/components (0.99s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 version -o=json --components
E0115 11:44:11.083578  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.99s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-401444 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-401444
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-401444
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-401444 image ls --format short --alsologtostderr:
I0115 11:44:11.145025  155062 out.go:296] Setting OutFile to fd 1 ...
I0115 11:44:11.145312  155062 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:11.145324  155062 out.go:309] Setting ErrFile to fd 2...
I0115 11:44:11.145332  155062 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:11.145516  155062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
I0115 11:44:11.146134  155062 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:11.146259  155062 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:11.146681  155062 cli_runner.go:164] Run: docker container inspect functional-401444 --format={{.State.Status}}
I0115 11:44:11.166545  155062 ssh_runner.go:195] Run: systemctl --version
I0115 11:44:11.166631  155062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-401444
I0115 11:44:11.186191  155062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/functional-401444/id_rsa Username:docker}
I0115 11:44:11.340785  155062 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-401444 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/google-containers/addon-resizer      | functional-401444  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| localhost/my-image                          | functional-401444  | sha256:a8e286 | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| docker.io/library/nginx                     | latest             | sha256:a87587 | 70.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/minikube-local-cache-test | functional-401444  | sha256:e4bd6f | 1.01kB |
| docker.io/library/nginx                     | alpine             | sha256:529b56 | 18MB   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-401444 image ls --format table --alsologtostderr:
I0115 11:44:15.578501  155819 out.go:296] Setting OutFile to fd 1 ...
I0115 11:44:15.578645  155819 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:15.578656  155819 out.go:309] Setting ErrFile to fd 2...
I0115 11:44:15.578663  155819 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:15.578953  155819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
I0115 11:44:15.579778  155819 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:15.579981  155819 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:15.580666  155819 cli_runner.go:164] Run: docker container inspect functional-401444 --format={{.State.Status}}
I0115 11:44:15.598599  155819 ssh_runner.go:195] Run: systemctl --version
I0115 11:44:15.598646  155819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-401444
I0115 11:44:15.616498  155819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/functional-401444/id_rsa Username:docker}
I0115 11:44:15.712510  155819 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-401444 image ls --format json --alsologtostderr:
[{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17978594"},{"id":"sha256:a8e286771ad1fe172701f3b6cdc99421cf87bf999c88ea213aad05b659e3c78f","repoDigests":[],"repoTags":["localhost/my-image:functional-401444"],"size":"774902"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"70520324"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-401444"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e4bd6fc7241b339f27c9fd0f792b16170aa4fdb123c38830cb5a5e992908a175","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-401444"],"size":"1007"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-401444 image ls --format json --alsologtostderr:
I0115 11:44:15.343357  155741 out.go:296] Setting OutFile to fd 1 ...
I0115 11:44:15.343625  155741 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:15.343636  155741 out.go:309] Setting ErrFile to fd 2...
I0115 11:44:15.343640  155741 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:15.343883  155741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
I0115 11:44:15.344581  155741 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:15.344704  155741 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:15.345144  155741 cli_runner.go:164] Run: docker container inspect functional-401444 --format={{.State.Status}}
I0115 11:44:15.362080  155741 ssh_runner.go:195] Run: systemctl --version
I0115 11:44:15.362142  155741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-401444
I0115 11:44:15.379365  155741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/functional-401444/id_rsa Username:docker}
I0115 11:44:15.472533  155741 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-401444 image ls --format yaml --alsologtostderr:
- id: sha256:e4bd6fc7241b339f27c9fd0f792b16170aa4fdb123c38830cb5a5e992908a175
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-401444
size: "1007"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-401444
size: "10823156"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "17978594"
- id: sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "70520324"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-401444 image ls --format yaml --alsologtostderr:
I0115 11:44:11.451709  155107 out.go:296] Setting OutFile to fd 1 ...
I0115 11:44:11.451973  155107 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:11.451983  155107 out.go:309] Setting ErrFile to fd 2...
I0115 11:44:11.451988  155107 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:11.452263  155107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
I0115 11:44:11.452886  155107 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:11.452993  155107 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:11.453395  155107 cli_runner.go:164] Run: docker container inspect functional-401444 --format={{.State.Status}}
I0115 11:44:11.473324  155107 ssh_runner.go:195] Run: systemctl --version
I0115 11:44:11.473399  155107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-401444
I0115 11:44:11.492019  155107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/functional-401444/id_rsa Username:docker}
I0115 11:44:11.640814  155107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh pgrep buildkitd: exit status 1 (394.661653ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image build -t localhost/my-image:functional-401444 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 image build -t localhost/my-image:functional-401444 testdata/build --alsologtostderr: (2.844179678s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-401444 image build -t localhost/my-image:functional-401444 testdata/build --alsologtostderr:
I0115 11:44:12.211978  155243 out.go:296] Setting OutFile to fd 1 ...
I0115 11:44:12.212205  155243 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:12.212218  155243 out.go:309] Setting ErrFile to fd 2...
I0115 11:44:12.212225  155243 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:44:12.212569  155243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
I0115 11:44:12.213500  155243 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:12.213999  155243 config.go:182] Loaded profile config "functional-401444": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:44:12.214449  155243 cli_runner.go:164] Run: docker container inspect functional-401444 --format={{.State.Status}}
I0115 11:44:12.232445  155243 ssh_runner.go:195] Run: systemctl --version
I0115 11:44:12.232493  155243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-401444
I0115 11:44:12.251653  155243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/functional-401444/id_rsa Username:docker}
I0115 11:44:12.440410  155243 build_images.go:151] Building image from path: /tmp/build.3282235247.tar
I0115 11:44:12.440490  155243 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 11:44:12.450027  155243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3282235247.tar
I0115 11:44:12.453470  155243 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3282235247.tar: stat -c "%s %y" /var/lib/minikube/build/build.3282235247.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3282235247.tar': No such file or directory
I0115 11:44:12.453500  155243 ssh_runner.go:362] scp /tmp/build.3282235247.tar --> /var/lib/minikube/build/build.3282235247.tar (3072 bytes)
I0115 11:44:12.481463  155243 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3282235247
I0115 11:44:12.489833  155243 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3282235247 -xf /var/lib/minikube/build/build.3282235247.tar
I0115 11:44:12.536917  155243 containerd.go:379] Building image: /var/lib/minikube/build/build.3282235247
I0115 11:44:12.536998  155243 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3282235247 --local dockerfile=/var/lib/minikube/build/build.3282235247 --output type=image,name=localhost/my-image:functional-401444
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 1.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e62a9fa8cbc907f13c6c65e4878cad0f7e2028641a7037aada16b85ce776754f done
#8 exporting config sha256:a8e286771ad1fe172701f3b6cdc99421cf87bf999c88ea213aad05b659e3c78f done
#8 naming to localhost/my-image:functional-401444 done
#8 DONE 0.1s
I0115 11:44:14.965083  155243 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3282235247 --local dockerfile=/var/lib/minikube/build/build.3282235247 --output type=image,name=localhost/my-image:functional-401444: (2.428046464s)
I0115 11:44:14.965225  155243 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3282235247
I0115 11:44:14.973706  155243 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3282235247.tar
I0115 11:44:14.981315  155243 build_images.go:207] Built localhost/my-image:functional-401444 from /tmp/build.3282235247.tar
I0115 11:44:14.981353  155243 build_images.go:123] succeeded building to: functional-401444
I0115 11:44:14.981360  155243 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)

TestFunctional/parallel/ImageCommands/Setup (1s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-401444
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-401444 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-401444 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [669bcd4d-fc2c-4342-9bdd-cbac478dcb26] Pending
helpers_test.go:344: "nginx-svc" [669bcd4d-fc2c-4342-9bdd-cbac478dcb26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [669bcd4d-fc2c-4342-9bdd-cbac478dcb26] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004355982s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.30s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image load --daemon gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 image load --daemon gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr: (4.415669453s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "349.52716ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "78.184434ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "325.297149ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "65.259058ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image load --daemon gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 image load --daemon gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr: (3.078786273s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-401444 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.131.85 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-401444 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-401444 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-401444 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-mr6fc" [3768a868-f4d1-409a-a3c9-70eb2d52fa80] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-mr6fc" [3768a868-f4d1-409a-a3c9-70eb2d52fa80] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003884706s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-401444
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image load --daemon gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 image load --daemon gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr: (3.681794449s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.70s)

TestFunctional/parallel/MountCmd/any-port (6.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdany-port612185085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705319039005572510" to /tmp/TestFunctionalparallelMountCmdany-port612185085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705319039005572510" to /tmp/TestFunctionalparallelMountCmdany-port612185085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705319039005572510" to /tmp/TestFunctionalparallelMountCmdany-port612185085/001/test-1705319039005572510
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (318.362312ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 15 11:43 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 15 11:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 15 11:43 test-1705319039005572510
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh cat /mount-9p/test-1705319039005572510
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-401444 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [75fe4616-f83c-485a-a152-8b7c1942621c] Pending
E0115 11:44:00.836797  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [75fe4616-f83c-485a-a152-8b7c1942621c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0115 11:44:02.121164  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [75fe4616-f83c-485a-a152-8b7c1942621c] Running
helpers_test.go:344: "busybox-mount" [75fe4616-f83c-485a-a152-8b7c1942621c] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [75fe4616-f83c-485a-a152-8b7c1942621c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003981913s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-401444 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdany-port612185085/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image save gcr.io/google-containers/addon-resizer:functional-401444 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image rm gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls
E0115 11:44:00.843243  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 11:44:00.853513  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 11:44:00.873793  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 11:44:00.914632  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 11:44:00.994962  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
E0115 11:44:01.159197  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 11:44:01.479985  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-401444 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.281185939s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-401444
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 image save --daemon gcr.io/google-containers/addon-resizer:functional-401444 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-401444
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 service list -o json
functional_test.go:1493: Took "547.277302ms" to run "out/minikube-linux-amd64 -p functional-401444 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 service --namespace=default --https --url hello-node
E0115 11:44:03.401871  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
functional_test.go:1521: found endpoint: https://192.168.49.2:32695
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32695
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/MountCmd/specific-port (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdspecific-port475258197/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (311.238659ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdspecific-port475258197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh "sudo umount -f /mount-9p": exit status 1 (304.121743ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-401444 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdspecific-port475258197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup355652822/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup355652822/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup355652822/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T" /mount1: exit status 1 (438.845529ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-401444 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-401444 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup355652822/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup355652822/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-401444 /tmp/TestFunctionalparallelMountCmdVerifyCleanup355652822/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-401444
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-401444
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-401444
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (68.01s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-139528 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0115 11:44:41.804325  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 11:45:22.765456  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-139528 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m8.011796013s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (68.01s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-139528 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-139528 addons enable ingress --alsologtostderr -v=5: (8.823267069s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.82s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-139528 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (29.56s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-139528 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-139528 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.66406915s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-139528 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-139528 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f0baa9d3-0470-4a47-9d75-709d2308e63d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f0baa9d3-0470-4a47-9d75-709d2308e63d] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.003372211s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-139528 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-139528 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-139528 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-139528 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-139528 addons disable ingress-dns --alsologtostderr -v=1: (3.343020509s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-139528 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-139528 addons disable ingress --alsologtostderr -v=1: (7.400158174s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (29.56s)

TestJSONOutput/start/Command (53.27s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-322504 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0115 11:46:44.688298  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-322504 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (53.269553308s)
--- PASS: TestJSONOutput/start/Command (53.27s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-322504 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-322504 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.62s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-322504 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-322504 --output=json --user=testUser: (5.620962438s)
--- PASS: TestJSONOutput/stop/Command (5.62s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-589543 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-589543 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.792181ms)

-- stdout --
	{"specversion":"1.0","id":"cfccbce3-7814-471c-8f8f-c4c04066b3c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-589543] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fc5e514-7193-4337-86e3-fd41ebe62817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17957"}}
	{"specversion":"1.0","id":"2d599a62-7363-4c1f-8bfe-ec1ac59df15f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"50bee8b3-6a78-4411-be4d-618e07756827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig"}}
	{"specversion":"1.0","id":"5fcdcbae-6144-4291-8914-ba4adbea68e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube"}}
	{"specversion":"1.0","id":"d281dcdd-0548-4a8b-8d1e-8bfb2d284da0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e0c3b09d-66f2-4985-bf9e-07c4523f497a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0c3a325b-c07a-49d2-a2a6-acb86c368f90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-589543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-589543
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (28.28s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-645988 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-645988 --network=: (26.615541389s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-645988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-645988
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-645988: (1.649741196s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.28s)

TestKicCustomNetwork/use_default_bridge_network (23.91s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-836822 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-836822 --network=bridge: (21.984055185s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-836822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-836822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-836822: (1.912156147s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.91s)

TestKicExistingNetwork (24.21s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-428111 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-428111 --network=existing-network: (22.161184426s)
helpers_test.go:175: Cleaning up "existing-network-428111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-428111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-428111: (1.91162797s)
--- PASS: TestKicExistingNetwork (24.21s)

TestKicCustomSubnet (27.28s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-461790 --subnet=192.168.60.0/24
E0115 11:48:46.585386  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:46.590657  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:46.600879  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:46.621140  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:46.661520  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:46.741902  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:46.902343  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:47.223009  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:47.864001  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:49.144318  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:51.704532  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:48:56.825627  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:49:00.835988  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-461790 --subnet=192.168.60.0/24: (25.293811287s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-461790 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-461790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-461790
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-461790: (1.966392343s)
--- PASS: TestKicCustomSubnet (27.28s)

TestKicStaticIP (24.78s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-916472 --static-ip=192.168.200.200
E0115 11:49:07.065999  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:49:27.546585  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:49:28.528735  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-916472 --static-ip=192.168.200.200: (22.595168392s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-916472 ip
helpers_test.go:175: Cleaning up "static-ip-916472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-916472
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-916472: (2.044479861s)
--- PASS: TestKicStaticIP (24.78s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (49.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-495510 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-495510 --driver=docker  --container-runtime=containerd: (23.91217844s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-498958 --driver=docker  --container-runtime=containerd
E0115 11:50:08.507139  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-498958 --driver=docker  --container-runtime=containerd: (20.462650191s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-495510
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-498958
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-498958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-498958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-498958: (1.849642074s)
helpers_test.go:175: Cleaning up "first-495510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-495510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-495510: (1.863528912s)
--- PASS: TestMinikubeProfile (49.14s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-805624 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-805624 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.653285341s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-805624 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.79s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-820052 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-820052 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.789748376s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.79s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820052 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-805624 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-805624 --alsologtostderr -v=5: (1.591346749s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820052 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-820052
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-820052: (1.180790665s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.66s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-820052
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-820052: (5.660253426s)
--- PASS: TestMountStart/serial/RestartStopped (6.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-820052 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (70.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874974 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0115 11:50:45.269968  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:45.275325  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:45.285850  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:45.306209  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:45.348016  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:45.428430  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:45.588858  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:45.909876  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:46.550818  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:47.831235  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:50.392044  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:50:55.513241  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:51:05.754144  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:51:26.234510  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:51:30.428174  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874974 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m10.335668015s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.80s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-874974 -- rollout status deployment/busybox: (1.713558127s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-dp7gv -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-dp7gv -- nslookup kubernetes.io: (5.232535107s)
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-tbb9q -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-dp7gv -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-tbb9q -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-dp7gv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-tbb9q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.31s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-dp7gv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-dp7gv -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-tbb9q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874974 -- exec busybox-5bc68d56bd-tbb9q -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (20.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-874974 -v 3 --alsologtostderr
E0115 11:52:07.195034  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-874974 -v 3 --alsologtostderr: (19.540224702s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.15s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-874974 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp testdata/cp-test.txt multinode-874974:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3417522190/001/cp-test_multinode-874974.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974:/home/docker/cp-test.txt multinode-874974-m02:/home/docker/cp-test_multinode-874974_multinode-874974-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m02 "sudo cat /home/docker/cp-test_multinode-874974_multinode-874974-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974:/home/docker/cp-test.txt multinode-874974-m03:/home/docker/cp-test_multinode-874974_multinode-874974-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m03 "sudo cat /home/docker/cp-test_multinode-874974_multinode-874974-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp testdata/cp-test.txt multinode-874974-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3417522190/001/cp-test_multinode-874974-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974-m02:/home/docker/cp-test.txt multinode-874974:/home/docker/cp-test_multinode-874974-m02_multinode-874974.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974 "sudo cat /home/docker/cp-test_multinode-874974-m02_multinode-874974.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974-m02:/home/docker/cp-test.txt multinode-874974-m03:/home/docker/cp-test_multinode-874974-m02_multinode-874974-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m03 "sudo cat /home/docker/cp-test_multinode-874974-m02_multinode-874974-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp testdata/cp-test.txt multinode-874974-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3417522190/001/cp-test_multinode-874974-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974-m03:/home/docker/cp-test.txt multinode-874974:/home/docker/cp-test_multinode-874974-m03_multinode-874974.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974 "sudo cat /home/docker/cp-test_multinode-874974-m03_multinode-874974.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 cp multinode-874974-m03:/home/docker/cp-test.txt multinode-874974-m02:/home/docker/cp-test_multinode-874974-m03_multinode-874974-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 ssh -n multinode-874974-m02 "sudo cat /home/docker/cp-test_multinode-874974-m03_multinode-874974-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.47s)

                                                
                                    
TestMultiNode/serial/StopNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-874974 node stop m03: (1.184144751s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874974 status: exit status 7 (482.946131ms)

                                                
                                                
-- stdout --
	multinode-874974
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-874974-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-874974-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr: exit status 7 (480.048331ms)

                                                
                                                
-- stdout --
	multinode-874974
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-874974-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-874974-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 11:52:36.378535  212199 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:52:36.378806  212199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:52:36.378816  212199 out.go:309] Setting ErrFile to fd 2...
	I0115 11:52:36.378820  212199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:52:36.379018  212199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:52:36.379223  212199 out.go:303] Setting JSON to false
	I0115 11:52:36.379254  212199 mustload.go:65] Loading cluster: multinode-874974
	I0115 11:52:36.379312  212199 notify.go:220] Checking for updates...
	I0115 11:52:36.379637  212199 config.go:182] Loaded profile config "multinode-874974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:52:36.379650  212199 status.go:255] checking status of multinode-874974 ...
	I0115 11:52:36.380067  212199 cli_runner.go:164] Run: docker container inspect multinode-874974 --format={{.State.Status}}
	I0115 11:52:36.397332  212199 status.go:330] multinode-874974 host status = "Running" (err=<nil>)
	I0115 11:52:36.397376  212199 host.go:66] Checking if "multinode-874974" exists ...
	I0115 11:52:36.397644  212199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874974
	I0115 11:52:36.415186  212199 host.go:66] Checking if "multinode-874974" exists ...
	I0115 11:52:36.415425  212199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:52:36.415465  212199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874974
	I0115 11:52:36.431760  212199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/multinode-874974/id_rsa Username:docker}
	I0115 11:52:36.525098  212199 ssh_runner.go:195] Run: systemctl --version
	I0115 11:52:36.529104  212199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:52:36.539179  212199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:52:36.592826  212199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-15 11:52:36.583501143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 11:52:36.593450  212199 kubeconfig.go:92] found "multinode-874974" server: "https://192.168.58.2:8443"
	I0115 11:52:36.593475  212199 api_server.go:166] Checking apiserver status ...
	I0115 11:52:36.593507  212199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 11:52:36.604703  212199 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	I0115 11:52:36.613689  212199 api_server.go:182] apiserver freezer: "2:freezer:/docker/87b7f0e0c635345d3ecd914d8c7e9c5afcf47309f9379587c76d10f27f8a0e51/kubepods/burstable/podb75ae99360f036dc0895890b17d8241f/888647275eff13edd12162aa5cd89012ef3aca3b3aa04cbf3f80dc3070091d4b"
	I0115 11:52:36.613751  212199 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/87b7f0e0c635345d3ecd914d8c7e9c5afcf47309f9379587c76d10f27f8a0e51/kubepods/burstable/podb75ae99360f036dc0895890b17d8241f/888647275eff13edd12162aa5cd89012ef3aca3b3aa04cbf3f80dc3070091d4b/freezer.state
	I0115 11:52:36.621157  212199 api_server.go:204] freezer state: "THAWED"
	I0115 11:52:36.621184  212199 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0115 11:52:36.626173  212199 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0115 11:52:36.626195  212199 status.go:421] multinode-874974 apiserver status = Running (err=<nil>)
	I0115 11:52:36.626211  212199 status.go:257] multinode-874974 status: &{Name:multinode-874974 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:52:36.626227  212199 status.go:255] checking status of multinode-874974-m02 ...
	I0115 11:52:36.626452  212199 cli_runner.go:164] Run: docker container inspect multinode-874974-m02 --format={{.State.Status}}
	I0115 11:52:36.643673  212199 status.go:330] multinode-874974-m02 host status = "Running" (err=<nil>)
	I0115 11:52:36.643697  212199 host.go:66] Checking if "multinode-874974-m02" exists ...
	I0115 11:52:36.643945  212199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-874974-m02
	I0115 11:52:36.660259  212199 host.go:66] Checking if "multinode-874974-m02" exists ...
	I0115 11:52:36.660535  212199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:52:36.660581  212199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-874974-m02
	I0115 11:52:36.676985  212199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/17957-106484/.minikube/machines/multinode-874974-m02/id_rsa Username:docker}
	I0115 11:52:36.768939  212199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:52:36.778877  212199 status.go:257] multinode-874974-m02 status: &{Name:multinode-874974-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:52:36.778913  212199 status.go:255] checking status of multinode-874974-m03 ...
	I0115 11:52:36.779178  212199 cli_runner.go:164] Run: docker container inspect multinode-874974-m03 --format={{.State.Status}}
	I0115 11:52:36.796603  212199 status.go:330] multinode-874974-m03 host status = "Stopped" (err=<nil>)
	I0115 11:52:36.796631  212199 status.go:343] host is not running, skipping remaining checks
	I0115 11:52:36.796640  212199 status.go:257] multinode-874974-m03 status: &{Name:multinode-874974-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-874974 node start m03 --alsologtostderr: (10.440568487s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.14s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (111.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874974
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-874974
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-874974: (24.704110506s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874974 --wait=true -v=8 --alsologtostderr
E0115 11:53:29.115615  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:53:46.585800  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:54:00.836221  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 11:54:14.269094  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874974 --wait=true -v=8 --alsologtostderr: (1m26.669671438s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874974
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.50s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-874974 node delete m03: (4.087373616s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.68s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-874974 stop: (23.596921276s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874974 status: exit status 7 (100.0141ms)

                                                
                                                
-- stdout --
	multinode-874974
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-874974-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr: exit status 7 (96.023319ms)

                                                
                                                
-- stdout --
	multinode-874974
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-874974-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 11:55:07.873704  222420 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:55:07.873970  222420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:55:07.873979  222420 out.go:309] Setting ErrFile to fd 2...
	I0115 11:55:07.873984  222420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:55:07.874186  222420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 11:55:07.874359  222420 out.go:303] Setting JSON to false
	I0115 11:55:07.874385  222420 mustload.go:65] Loading cluster: multinode-874974
	I0115 11:55:07.874496  222420 notify.go:220] Checking for updates...
	I0115 11:55:07.874770  222420 config.go:182] Loaded profile config "multinode-874974": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:55:07.874785  222420 status.go:255] checking status of multinode-874974 ...
	I0115 11:55:07.875217  222420 cli_runner.go:164] Run: docker container inspect multinode-874974 --format={{.State.Status}}
	I0115 11:55:07.892144  222420 status.go:330] multinode-874974 host status = "Stopped" (err=<nil>)
	I0115 11:55:07.892210  222420 status.go:343] host is not running, skipping remaining checks
	I0115 11:55:07.892217  222420 status.go:257] multinode-874974 status: &{Name:multinode-874974 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:55:07.892245  222420 status.go:255] checking status of multinode-874974-m02 ...
	I0115 11:55:07.892502  222420 cli_runner.go:164] Run: docker container inspect multinode-874974-m02 --format={{.State.Status}}
	I0115 11:55:07.908778  222420 status.go:330] multinode-874974-m02 host status = "Stopped" (err=<nil>)
	I0115 11:55:07.908802  222420 status.go:343] host is not running, skipping remaining checks
	I0115 11:55:07.908808  222420 status.go:257] multinode-874974-m02 status: &{Name:multinode-874974-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874974 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0115 11:55:45.270920  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
E0115 11:56:12.956187  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874974 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.449626364s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874974 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874974
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874974-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-874974-m02 --driver=docker  --container-runtime=containerd: exit status 14 (78.812207ms)

                                                
                                                
-- stdout --
	* [multinode-874974-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-874974-m02' is duplicated with machine name 'multinode-874974-m02' in profile 'multinode-874974'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874974-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874974-m03 --driver=docker  --container-runtime=containerd: (23.216696342s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-874974
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-874974: exit status 80 (281.489242ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-874974
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-874974-m03 already exists in multinode-874974-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-874974-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-874974-m03: (1.871705031s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.51s)

                                                
                                    
TestPreload (143.53s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-693033 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-693033 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m3.411733457s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-693033 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-693033
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-693033: (11.836138291s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-693033 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0115 11:58:46.585981  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
E0115 11:59:00.836214  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-693033 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m5.092452924s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-693033 image list
helpers_test.go:175: Cleaning up "test-preload-693033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-693033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-693033: (2.245328443s)
--- PASS: TestPreload (143.53s)

                                                
                                    
TestScheduledStopUnix (97.93s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-653705 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-653705 --memory=2048 --driver=docker  --container-runtime=containerd: (21.945103994s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653705 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-653705 -n scheduled-stop-653705
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653705 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653705 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-653705 -n scheduled-stop-653705
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-653705
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-653705 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0115 12:00:23.889752  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
E0115 12:00:45.270143  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-653705
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-653705: exit status 7 (80.111632ms)

                                                
                                                
-- stdout --
	scheduled-stop-653705
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-653705 -n scheduled-stop-653705
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-653705 -n scheduled-stop-653705: exit status 7 (77.013077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-653705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-653705
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-653705: (4.511735349s)
--- PASS: TestScheduledStopUnix (97.93s)

                                                
                                    
TestInsufficientStorage (9.98s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-378957 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-378957 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.612171021s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"503ea6a1-700c-4dad-8c7a-1c57aeb98719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-378957] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e89cf99e-73cd-44dc-a64c-ec5c0b4463b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17957"}}
	{"specversion":"1.0","id":"8ffc0fe2-81c6-401f-b454-66368c98fee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"11217eb3-8858-4aff-a9bf-27a743672390","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig"}}
	{"specversion":"1.0","id":"fc466f43-9cf5-4106-8b11-d3b9c7e02665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube"}}
	{"specversion":"1.0","id":"3cdb744f-ddca-4412-b4be-8d72683cb17c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"923cf176-fcc6-4394-90d1-8d5acb983442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"67eab42c-f563-47e6-a582-8bf39c8fe6b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8360a410-19d1-4b58-b046-f4176363f4f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"25dc7cc9-4ddc-415e-abe4-b035a4a2bc84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c706bc4-3774-42cd-96df-66a9151a55df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d083ea1a-8412-4752-b84c-dcc12828c099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-378957 in cluster insufficient-storage-378957","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e4742a7-7f11-47c4-b1ce-51d6c34e7394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0f6738a-b502-4b0d-9f5e-cac5df1c45cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"262f6db3-2d43-4aec-8180-8142e1466407","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-378957 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-378957 --output=json --layout=cluster: exit status 7 (273.615163ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-378957","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-378957","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 12:01:04.677093  243013 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-378957" does not appear in /home/jenkins/minikube-integration/17957-106484/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-378957 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-378957 --output=json --layout=cluster: exit status 7 (273.0757ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-378957","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-378957","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 12:01:04.950763  243103 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-378957" does not appear in /home/jenkins/minikube-integration/17957-106484/kubeconfig
	E0115 12:01:04.960206  243103 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/insufficient-storage-378957/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-378957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-378957
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-378957: (1.821076518s)
--- PASS: TestInsufficientStorage (9.98s)

                                                
                                    
TestRunningBinaryUpgrade (57.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1438070482 start -p running-upgrade-496879 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1438070482 start -p running-upgrade-496879 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (28.792994418s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-496879 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-496879 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.309815771s)
helpers_test.go:175: Cleaning up "running-upgrade-496879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-496879
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-496879: (2.23483894s)
--- PASS: TestRunningBinaryUpgrade (57.76s)

                                                
                                    
TestKubernetesUpgrade (344.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.765750827s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-806787
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-806787: (5.06344403s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-806787 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-806787 status --format={{.Host}}: exit status 7 (82.651666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.255026206s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-806787 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (82.717619ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-806787] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-806787
	    minikube start -p kubernetes-upgrade-806787 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8067872 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-806787 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-806787 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.522702853s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-806787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-806787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-806787: (2.629404473s)
--- PASS: TestKubernetesUpgrade (344.46s)

TestMissingContainerUpgrade (136.22s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.74572441 start -p missing-upgrade-490570 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.74572441 start -p missing-upgrade-490570 --memory=2200 --driver=docker  --container-runtime=containerd: (52.867072187s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-490570
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-490570
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-490570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-490570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m20.16119887s)
helpers_test.go:175: Cleaning up "missing-upgrade-490570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-490570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-490570: (2.087155345s)
--- PASS: TestMissingContainerUpgrade (136.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508068 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-508068 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.096839ms)

-- stdout --
	* [NoKubernetes-508068] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
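The `MK_USAGE` failure above (exit status 14) comes from flag validation: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A minimal sketch of that check, with a hypothetical `validate_flags` helper that is not minikube's actual code:

```python
# Illustrative sketch (not minikube's source) of the mutual-exclusion check.
def validate_flags(no_kubernetes: bool, kubernetes_version: str = "") -> str:
    """Return an MK_USAGE error string, or "" if the flag combination is legal."""
    if no_kubernetes and kubernetes_version:
        return ("MK_USAGE: cannot specify --kubernetes-version with "
                "--no-kubernetes")
    return ""

# Reproduces the failing invocation from the log.
print(validate_flags(no_kubernetes=True, kubernetes_version="1.20"))
```

As the stderr suggests, a globally pinned version (`minikube config unset kubernetes-version`) is the usual reason this trips when the user never typed `--kubernetes-version` explicitly.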
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (39.1s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508068 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508068 --driver=docker  --container-runtime=containerd: (38.732588247s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-508068 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.10s)

TestNoKubernetes/serial/StartWithStopK8s (15.41s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508068 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508068 --no-kubernetes --driver=docker  --container-runtime=containerd: (13.08915812s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-508068 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-508068 status -o json: exit status 2 (302.22846ms)

-- stdout --
	{"Name":"NoKubernetes-508068","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
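Parsing the status document above shows why `minikube status -o json` exits 2 here rather than 0: the host container is still running, but with `--no-kubernetes` every Kubernetes component is down. The JSON string below is copied verbatim from the log; the surrounding script is just an illustration.

```python
import json

# The exact status document printed by `minikube status -o json` above.
raw = ('{"Name":"NoKubernetes-508068","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
status = json.loads(raw)

# Host up, Kubernetes down: the degraded state behind exit status 2.
host_only = (status["Host"] == "Running"
             and status["Kubelet"] == "Stopped"
             and status["APIServer"] == "Stopped")
print(status["Name"], "running without Kubernetes:", host_only)
```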
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-508068
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-508068: (2.013392139s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.41s)

TestNoKubernetes/serial/Start (7.94s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508068 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508068 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.942022452s)
--- PASS: TestNoKubernetes/serial/Start (7.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-508068 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-508068 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.69375ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
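The non-zero exit here is the *pass* condition: `systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise (3 means inactive, matching the `ssh: Process exited with status 3` above), so the test succeeds precisely when kubelet is not running. A small sketch of that inverted assertion, with a hypothetical helper name:

```python
# Sketch of the assertion behind VerifyK8sNotRunning: systemctl's exit code
# (0 = active, non-zero such as 3 = inactive) is inverted by the test.
def kubelet_absent(is_active_exit_code: int) -> bool:
    """True when kubelet is NOT active, i.e. when the test should pass."""
    return is_active_exit_code != 0

print(kubelet_absent(3))  # inactive, as in the log: test passes
print(kubelet_absent(0))  # active: test would fail
```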
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (1.21s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-508068
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-508068: (1.20847087s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.26s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508068 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508068 --driver=docker  --container-runtime=containerd: (6.261467932s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.26s)

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

TestStoppedBinaryUpgrade/Upgrade (91.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2592839326 start -p stopped-upgrade-628545 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2592839326 start -p stopped-upgrade-628545 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (29.589187264s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2592839326 -p stopped-upgrade-628545 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2592839326 -p stopped-upgrade-628545 stop: (1.232868415s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-628545 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-628545 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.602919691s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (91.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-508068 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-508068 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.655833ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestPause/serial/Start (49.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-476278 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-476278 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (49.859301351s)
--- PASS: TestPause/serial/Start (49.86s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-628545
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestPause/serial/SecondStartNoReconfiguration (5.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-476278 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-476278 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.472202765s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.49s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-476278 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-476278 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-476278 --output=json --layout=cluster: exit status 2 (346.7089ms)

-- stdout --
	{"Name":"pause-476278","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-476278","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
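The cluster-layout document above uses status code 418 ("Paused") at both the cluster and component level, which is why `status --output=json --layout=cluster` exits 2 for a paused cluster. Parsing a trimmed copy of that JSON (Step/StepDetail and the kubeconfig component omitted for brevity) separates the paused apiserver from the merely stopped kubelet:

```python
import json

# Trimmed copy of the cluster-layout JSON printed above.
raw = '''{
  "Name": "pause-476278",
  "StatusCode": 418,
  "StatusName": "Paused",
  "Nodes": [{
    "Name": "pause-476278",
    "StatusCode": 200,
    "StatusName": "OK",
    "Components": {
      "apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
      "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}
    }
  }]
}'''
cluster = json.loads(raw)

# 418 marks paused components; 405 marks stopped ones.
paused = [name for name, comp in cluster["Nodes"][0]["Components"].items()
          if comp["StatusName"] == "Paused"]
print(paused)
```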
--- PASS: TestPause/serial/VerifyStatus (0.35s)

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-476278 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.83s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-476278 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

TestNetworkPlugins/group/false (3.99s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-491727 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-491727 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (174.697138ms)

-- stdout --
	* [false-491727] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0115 12:04:20.667114  291472 out.go:296] Setting OutFile to fd 1 ...
	I0115 12:04:20.667276  291472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 12:04:20.667294  291472 out.go:309] Setting ErrFile to fd 2...
	I0115 12:04:20.667302  291472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 12:04:20.667597  291472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-106484/.minikube/bin
	I0115 12:04:20.668223  291472 out.go:303] Setting JSON to false
	I0115 12:04:20.669393  291472 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10010,"bootTime":1705310251,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 12:04:20.669463  291472 start.go:138] virtualization: kvm guest
	I0115 12:04:20.671678  291472 out.go:177] * [false-491727] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 12:04:20.673529  291472 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 12:04:20.673564  291472 notify.go:220] Checking for updates...
	I0115 12:04:20.674887  291472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 12:04:20.676424  291472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-106484/kubeconfig
	I0115 12:04:20.677961  291472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-106484/.minikube
	I0115 12:04:20.679506  291472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 12:04:20.680911  291472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 12:04:20.682583  291472 config.go:182] Loaded profile config "cert-expiration-595061": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 12:04:20.682681  291472 config.go:182] Loaded profile config "kubernetes-upgrade-806787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0115 12:04:20.682788  291472 config.go:182] Loaded profile config "pause-476278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 12:04:20.682872  291472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 12:04:20.709425  291472 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 12:04:20.709544  291472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 12:04:20.771263  291472 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:65 SystemTime:2024-01-15 12:04:20.760982022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0115 12:04:20.771370  291472 docker.go:295] overlay module found
	I0115 12:04:20.773324  291472 out.go:177] * Using the docker driver based on user configuration
	I0115 12:04:20.774729  291472 start.go:298] selected driver: docker
	I0115 12:04:20.774750  291472 start.go:902] validating driver "docker" against <nil>
	I0115 12:04:20.774765  291472 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 12:04:20.776995  291472 out.go:177] 
	W0115 12:04:20.778287  291472 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0115 12:04:20.779586  291472 out.go:177] 

** /stderr **
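Exit status 14 above is another usage-validation failure: with the containerd runtime, `--cni=false` is rejected because containerd relies on a CNI plugin for pod networking. A minimal sketch of that check, with a hypothetical `validate_cni` helper that is not minikube's actual code:

```python
# Illustrative sketch (not minikube's source) of the CNI requirement check.
def validate_cni(container_runtime: str, cni: str) -> str:
    """Return an MK_USAGE error string, or "" if the combination is legal."""
    if container_runtime == "containerd" and cni == "false":
        return 'MK_USAGE: The "containerd" container runtime requires CNI'
    return ""

# Reproduces the failing invocation from the log.
print(validate_cni("containerd", "false"))
```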
net_test.go:88: 
----------------------- debugLogs start: false-491727 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-491727

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-491727

>>> host: /etc/nsswitch.conf:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /etc/hosts:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /etc/resolv.conf:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-491727

>>> host: crictl pods:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: crictl containers:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> k8s: describe netcat deployment:
error: context "false-491727" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-491727" does not exist

>>> k8s: netcat logs:
error: context "false-491727" does not exist

>>> k8s: describe coredns deployment:
error: context "false-491727" does not exist

>>> k8s: describe coredns pods:
error: context "false-491727" does not exist

>>> k8s: coredns logs:
error: context "false-491727" does not exist

>>> k8s: describe api server pod(s):
error: context "false-491727" does not exist

>>> k8s: api server logs:
error: context "false-491727" does not exist

>>> host: /etc/cni:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: ip a s:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: ip r s:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: iptables-save:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: iptables table nat:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> k8s: describe kube-proxy daemon set:
error: context "false-491727" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-491727" does not exist

>>> k8s: kube-proxy logs:
error: context "false-491727" does not exist

>>> host: kubelet daemon status:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"
>>> host: kubelet daemon config:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-595061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:02:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-806787
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-476278
contexts:
- context:
    cluster: cert-expiration-595061
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-595061
  name: cert-expiration-595061
- context:
    cluster: kubernetes-upgrade-806787
    user: kubernetes-upgrade-806787
  name: kubernetes-upgrade-806787
- context:
    cluster: pause-476278
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-476278
  name: pause-476278
current-context: pause-476278
kind: Config
preferences: {}
users:
- name: cert-expiration-595061
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/cert-expiration-595061/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/cert-expiration-595061/client.key
- name: kubernetes-upgrade-806787
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kubernetes-upgrade-806787/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kubernetes-upgrade-806787/client.key
- name: pause-476278
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/pause-476278/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/pause-476278/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-491727

>>> host: docker daemon status:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: docker daemon config:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /etc/docker/daemon.json:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: docker system info:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: cri-docker daemon status:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: cri-docker daemon config:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: cri-dockerd version:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: containerd daemon status:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: containerd daemon config:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /etc/containerd/config.toml:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: containerd config dump:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: crio daemon status:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: crio daemon config:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: /etc/crio:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

>>> host: crio config:
* Profile "false-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491727"

----------------------- debugLogs end: false-491727 [took: 3.665224275s] --------------------------------
helpers_test.go:175: Cleaning up "false-491727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-491727
--- PASS: TestNetworkPlugins/group/false (3.99s)

TestPause/serial/DeletePaused (2.69s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-476278 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-476278 --alsologtostderr -v=5: (2.693659845s)
--- PASS: TestPause/serial/DeletePaused (2.69s)

TestPause/serial/VerifyDeletedResources (0.51s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-476278
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-476278: exit status 1 (16.776411ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-476278: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)

TestStartStop/group/old-k8s-version/serial/FirstStart (114.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-382069 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-382069 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m54.795031197s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.80s)

TestStartStop/group/embed-certs/serial/FirstStart (49.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-082992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 12:05:09.629568  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-082992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (49.980236585s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.98s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-082992 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fea8a35e-4713-436d-b2e3-7a7ee537ebf7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fea8a35e-4713-436d-b2e3-7a7ee537ebf7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003733214s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-082992 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-082992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-082992 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/Stop (11.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-082992 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-082992 --alsologtostderr -v=3: (11.90409267s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.90s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-082992 -n embed-certs-082992
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-082992 -n embed-certs-082992: exit status 7 (96.798333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-082992 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (590.84s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-082992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 12:05:45.270413  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-082992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (9m50.463515131s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-082992 -n embed-certs-082992
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (590.84s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-382069 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e0e881f1-6985-4af5-a419-f459824d5b20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e0e881f1-6985-4af5-a419-f459824d5b20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.00364554s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-382069 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-382069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-382069 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-382069 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-382069 --alsologtostderr -v=3: (11.872717933s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-382069 -n old-k8s-version-382069
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-382069 -n old-k8s-version-382069: exit status 7 (81.106536ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-382069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (94.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-382069 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0115 12:07:08.316547  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/ingress-addon-legacy-139528/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-382069 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m34.008378168s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-382069 -n old-k8s-version-382069
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (94.51s)

TestStartStop/group/no-preload/serial/FirstStart (57.8s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-589195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-589195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (57.801816225s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.80s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-974470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-974470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m13.267258532s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.27s)

TestStartStop/group/no-preload/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-589195 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a001acca-9738-4d05-bc29-82bbecb4e68c] Pending
helpers_test.go:344: "busybox" [a001acca-9738-4d05-bc29-82bbecb4e68c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a001acca-9738-4d05-bc29-82bbecb4e68c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003839435s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-589195 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-t7cbs" [c32bb444-585a-4cee-9c81-42cef7b0f56c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004549807s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-t7cbs" [c32bb444-585a-4cee-9c81-42cef7b0f56c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003285142s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-382069 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-589195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-589195 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/no-preload/serial/Stop (14.2s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-589195 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-589195 --alsologtostderr -v=3: (14.196983847s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.20s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-382069 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-382069 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-382069 -n old-k8s-version-382069
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-382069 -n old-k8s-version-382069: exit status 2 (308.338722ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-382069 -n old-k8s-version-382069
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-382069 -n old-k8s-version-382069: exit status 2 (297.944686ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-382069 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-382069 -n old-k8s-version-382069
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-382069 -n old-k8s-version-382069
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

TestStartStop/group/newest-cni/serial/FirstStart (36.55s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-097357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-097357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (36.553418399s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.55s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589195 -n no-preload-589195
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589195 -n no-preload-589195: exit status 7 (91.866714ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-589195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (329.85s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-589195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-589195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m29.333293176s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589195 -n no-preload-589195
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.85s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (6.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-974470 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e2e24085-af6c-42ee-a979-585830320bd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0115 12:08:46.585828  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/functional-401444/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e2e24085-af6c-42ee-a979-585830320bd1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 6.004399558s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-974470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (6.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-974470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-974470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-974470 --alsologtostderr -v=3
E0115 12:09:00.836365  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/addons-391328/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-974470 --alsologtostderr -v=3: (11.862937267s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470: exit status 7 (79.42693ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-974470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (331.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-974470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-974470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m31.593056162s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (331.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-097357 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-097357 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-097357 --alsologtostderr -v=3: (1.198317728s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-097357 -n newest-cni-097357
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-097357 -n newest-cni-097357: exit status 7 (78.945452ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-097357 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (25.59s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-097357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-097357 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (25.274188973s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-097357 -n newest-cni-097357
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.59s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-097357 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-097357 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-097357 -n newest-cni-097357
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-097357 -n newest-cni-097357: exit status 2 (304.072875ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-097357 -n newest-cni-097357
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-097357 -n newest-cni-097357: exit status 2 (298.283024ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-097357 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-097357 -n newest-cni-097357
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-097357 -n newest-cni-097357
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

TestNetworkPlugins/group/auto/Start (48.51s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (48.512449223s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-491727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-491727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t79wn" [4b7b9d65-0d70-4e44-a43f-d07415041f05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t79wn" [4b7b9d65-0d70-4e44-a43f-d07415041f05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004148636s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-491727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/Start (48.53s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0115 12:11:22.618612  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:22.623948  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:22.634196  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:22.654450  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:22.694723  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:22.775000  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:22.935799  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:23.256196  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:23.897048  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:25.177245  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:27.737693  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:32.858903  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:11:43.099647  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (48.525124383s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.53s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q6sxq" [de3a4798-8da2-4067-96de-8a1d025f2c3a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004534283s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-491727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-491727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5z4k2" [e2fddce7-4f6b-4873-865a-be79ff44b3a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5z4k2" [e2fddce7-4f6b-4873-865a-be79ff44b3a5] Running
E0115 12:12:03.580442  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003991901s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-491727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/calico/Start (65s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0115 12:12:44.540690  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.998275574s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.00s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w7s7f" [2df31de3-bdc4-4743-979c-05a7e3e6830e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004810453s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-491727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-491727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q7qxd" [ef76cba2-00d5-406f-b3d6-ff6128858ff7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q7qxd" [ef76cba2-00d5-406f-b3d6-ff6128858ff7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00427519s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-491727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (54.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0115 12:14:06.461386  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.419134106s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.42s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vvdx5" [f881918f-c8b1-415c-b0d8-f187fdb92174] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vvdx5" [f881918f-c8b1-415c-b0d8-f187fdb92174] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004252622s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vvdx5" [f881918f-c8b1-415c-b0d8-f187fdb92174] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00418445s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-589195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589195 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.76s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-589195 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589195 -n no-preload-589195
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589195 -n no-preload-589195: exit status 2 (316.165962ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-589195 -n no-preload-589195
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-589195 -n no-preload-589195: exit status 2 (300.170215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-589195 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589195 -n no-preload-589195
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-589195 -n no-preload-589195
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)

TestNetworkPlugins/group/enable-default-cni/Start (42.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (42.297354128s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.30s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-52tnz" [be84e477-3c14-4736-afe5-26cf0cf52a32] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-52tnz" [be84e477-3c14-4736-afe5-26cf0cf52a32] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003781902s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-52tnz" [be84e477-3c14-4736-afe5-26cf0cf52a32] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003689604s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-974470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-974470 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-974470 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470: exit status 2 (361.976763ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470: exit status 2 (308.43923ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-974470 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-974470 -n default-k8s-diff-port-974470
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.91s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-491727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-491727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xp84p" [eb4b3e58-f253-4406-bcaa-82500a941b57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xp84p" [eb4b3e58-f253-4406-bcaa-82500a941b57] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004190925s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/Start (53.05s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.05014182s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.05s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-491727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-491727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-491727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rkwtq" [d4bacfcc-5da1-4496-a749-d8733799401f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rkwtq" [d4bacfcc-5da1-4496-a749-d8733799401f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.006095085s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-491727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (79.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-491727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m19.15823428s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.16s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-42cbz" [ecf6c536-591a-4958-8087-823842e04972] Running
E0115 12:15:33.287689  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:33.292993  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:33.303343  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:33.323708  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:33.363996  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:33.444320  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:33.604892  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:33.925640  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:34.566552  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
E0115 12:15:35.847632  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00401307s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-42cbz" [ecf6c536-591a-4958-8087-823842e04972] Running
E0115 12:15:38.407925  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004077716s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-082992 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-082992 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-082992 --alsologtostderr -v=1
E0115 12:15:43.528949  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-082992 -n embed-certs-082992
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-082992 -n embed-certs-082992: exit status 2 (306.651803ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-082992 -n embed-certs-082992
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-082992 -n embed-certs-082992: exit status 2 (354.53007ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-082992 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-082992 -n embed-certs-082992
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-082992 -n embed-certs-082992
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-z6gnx" [98a79e6c-497e-4a27-9aff-86113197ebd0] Running
E0115 12:15:53.769119  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0041216s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-491727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-491727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hqfxk" [98984f21-383b-468d-a1cd-2ecf68b16cfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hqfxk" [98984f21-383b-468d-a1cd-2ecf68b16cfb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004219186s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.20s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-491727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-491727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-491727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-79bpt" [9cd6328b-7189-4343-8c04-6e55a60f2b4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 12:16:49.064988  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:49.070257  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:49.080493  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:49.100792  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:49.141098  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:49.221450  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:49.381889  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:49.702684  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:50.302517  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/old-k8s-version-382069/client.crt: no such file or directory
E0115 12:16:50.343700  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-79bpt" [9cd6328b-7189-4343-8c04-6e55a60f2b4b] Running
E0115 12:16:51.624819  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:54.185142  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kindnet-491727/client.crt: no such file or directory
E0115 12:16:55.210057  113285 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/auto-491727/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003955094s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-491727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-491727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (26/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-195074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-195074
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.71s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-491727 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-491727

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-491727

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: /etc/hosts:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: /etc/resolv.conf:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-491727

>>> host: crictl pods:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: crictl containers:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> k8s: describe netcat deployment:
error: context "kubenet-491727" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-491727" does not exist

>>> k8s: netcat logs:
error: context "kubenet-491727" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-491727" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-491727" does not exist

>>> k8s: coredns logs:
error: context "kubenet-491727" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-491727" does not exist

>>> k8s: api server logs:
error: context "kubenet-491727" does not exist

>>> host: /etc/cni:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: ip a s:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: ip r s:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: iptables-save:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: iptables table nat:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-491727" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-491727" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-491727" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> host: kubelet daemon config:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

>>> k8s: kubelet logs:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-595061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:02:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-806787
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-476278
contexts:
- context:
    cluster: cert-expiration-595061
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-595061
  name: cert-expiration-595061
- context:
    cluster: kubernetes-upgrade-806787
    user: kubernetes-upgrade-806787
  name: kubernetes-upgrade-806787
- context:
    cluster: pause-476278
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-476278
  name: pause-476278
current-context: pause-476278
kind: Config
preferences: {}
users:
- name: cert-expiration-595061
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/cert-expiration-595061/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/cert-expiration-595061/client.key
- name: kubernetes-upgrade-806787
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kubernetes-upgrade-806787/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kubernetes-upgrade-806787/client.key
- name: pause-476278
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/pause-476278/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/pause-476278/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-491727

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491727"

                                                
                                                
----------------------- debugLogs end: kubenet-491727 [took: 3.535667251s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-491727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-491727
--- SKIP: TestNetworkPlugins/group/kubenet (3.71s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-491727 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-491727" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-595061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17957-106484/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:02:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-806787
contexts:
- context:
    cluster: cert-expiration-595061
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 12:04:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-595061
  name: cert-expiration-595061
- context:
    cluster: kubernetes-upgrade-806787
    user: kubernetes-upgrade-806787
  name: kubernetes-upgrade-806787
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-595061
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/cert-expiration-595061/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/cert-expiration-595061/client.key
- name: kubernetes-upgrade-806787
  user:
    client-certificate: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kubernetes-upgrade-806787/client.crt
    client-key: /home/jenkins/minikube-integration/17957-106484/.minikube/profiles/kubernetes-upgrade-806787/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-491727

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: containerd daemon config:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: containerd config dump:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: crio daemon status:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: crio daemon config:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: /etc/crio:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

>>> host: crio config:
* Profile "cilium-491727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491727"

----------------------- debugLogs end: cilium-491727 [took: 3.770222748s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-491727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-491727
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)
